I’m heading out the door in a few minutes to take the train to Zurich and a (thankfully direct) flight from there to San Francisco. I’ll have time on the flight to write up the next part in the series, so all will be in place for this weekend’s VR Hackathon.
In today’s post we’re going to extend the implementation we saw yesterday (and introduced on Monday) by adding full-screen viewing and device-tilt navigation.
Full-screen mode is easy: I borrowed some code from here that works well. The only thing to keep in mind is that the API can only be called from a UI event handler (such as when someone has pressed a button) – this is clearly intended to stop naughty pages from forcing you into full-screen mode on load. So we’re adding a single, huge “Start” button to launch the viewer. Nothing particularly interesting, although we do hide – and change the Z-order on – some divs to make an apparently multi-page UI happen via a single HTML file. We’ll extend this approach in tomorrow’s post to show more buttons, one for each hosted model.
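To make the pattern concrete, here’s a rough sketch of that kind of click handler – the element IDs are illustrative rather than taken from the actual sample – which requests full-screen mode (falling back through the vendor-prefixed variants) and then swaps the visible div:

document.getElementById('start').onclick = function () {
  // Request full-screen on the whole document: try the
  // standard API first, then the vendor-prefixed variants
  var elem = document.documentElement;
  if (elem.requestFullscreen)
    elem.requestFullscreen();
  else if (elem.webkitRequestFullscreen)
    elem.webkitRequestFullscreen();
  else if (elem.mozRequestFullScreen)
    elem.mozRequestFullScreen();
  else if (elem.msRequestFullscreen)
    elem.msRequestFullscreen();

  // Hide the "start" page and raise the viewer div in the
  // Z-order (again, these IDs are hypothetical)
  document.getElementById('start-page').style.display = 'none';
  document.getElementById('viewer-page').style.zIndex = 10;
};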
Device-tilt support is only a little more involved: the window has a ‘deviceorientation’ event we can listen to that gives us alpha/beta/gamma values representing data coming from the host device’s sensors (presumably the accelerometer and magnetometer). These values appear to be reported irrespective of the screen’s actual orientation (i.e. whether it’s in portrait or landscape mode). We’re only interested in landscape mode, so we need to look at the alpha value for the horizontal (left-right) rotation and gamma for the vertical (front-back) rotation. The vertical rotation can be absolute, but we want to fix the left-right rotation based on an initial direction – horizontal rotations after that should be relative to that initial direction.
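Subscribing to the event is essentially a one-liner – something along these lines (the subscription itself is omitted from the listing below, but it’s in the full source; orb() is the handler we’ll look at shortly):

// Feed device-tilt events into our orb() handler,
// assuming the browser supports them
if (window.DeviceOrientationEvent) {
  window.addEventListener('deviceorientation', orb, false);
}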
The HTML page hasn’t changed substantially – it has some additional styles, but that’s about it.
Here are the relevant additions to the referenced JavaScript file (I’ve omitted the UI changes and the event handler subscription – you can get the full source here).
function orb(e) {
  if (e.alpha && e.gamma) {
    // Remove our handlers watching for camera updates,
    // as we'll make any changes manually
    // (we won't actually bother adding them back, afterwards,
    // as this means we're in mobile mode and probably inside
    // a Google Cardboard holder)
    unwatchCameras();

    // Our base direction allows us to make relative horizontal
    // rotations when we rotate left & right
    if (!baseDir)
      baseDir = e.alpha;

    if (viewerLeft.running && viewerRight.running) {
      var deg2rad = Math.PI / 180;

      // gamma is the front-to-back tilt in degrees (with
      // this screen orientation), with +90/-90 being
      // vertical, negative numbers tilting 'downwards'
      // and positive 'upwards'
      var vert = (e.gamma + (e.gamma <= 0 ? 90 : -90)) * deg2rad;

      // alpha is the compass direction the device is
      // facing in degrees. This equates to the
      // left-right rotation in landscape
      // orientation (with 0-360 degrees)
      var horiz = (e.alpha - baseDir) * deg2rad;

      orbitViews(vert, horiz);
    }
  }
}
function orbitViews(vert, horiz) {
  // We'll rotate our position based on the initial position
  // and the target will stay the same
  var pos = new THREE.Vector3().copy(leftPos);
  var trg = viewerLeft.navigation.getTarget();

  // Start by applying the left/right orbit
  // (we need to check the up/down value, though)
  var zAxis = new THREE.Vector3(0, 0, 1);
  pos.applyAxisAngle(zAxis, (vert < 0 ? horiz + Math.PI : horiz));

  // Now add the up/down rotation
  var axis = new THREE.Vector3().subVectors(pos, trg).normalize();
  axis.cross(zAxis);
  pos.applyAxisAngle(axis, vert);

  // Zoom in with the left-hand view
  zoom(viewerLeft, pos.x, pos.y, pos.z, trg.x, trg.y, trg.z);

  // Get a camera position slightly to the right
  var pos2 = offsetCameraPos(viewerLeft, pos, trg, true);

  // And zoom in with that on the right-hand view, too
  var up = viewerLeft.navigation.getCameraUpVector();
  zoom(
    viewerRight,
    pos2.x, pos2.y, pos2.z,
    trg.x, trg.y, trg.z,
    up.x, up.y, up.z
  );
}
So how can we test this? Obviously with a physical device – and I recommend using Chrome on an Android device for best results – or you can choose to use Google Chrome Canary on your PC (whether Mac or Windows). Canary is Chrome’s experimental release channel: it gets frequent, bleeding-edge builds that run ahead of the Dev and Beta channels, and each release keeps the Canary name. As you can probably tell, this is the first time I’ve installed it. :-)
Canary currently includes some very helpful developer tools that go beyond what’s in the current stable release of Chrome (which at the time of writing is version 38.0.2125.101 for me, at least). The version of Chrome Canary I have installed is version 40.0.2185.0.
Here’s the main page loaded in Chrome Canary with the enhanced developer tools showing:
The important part is the bottom-right pane which includes sensor emulation information. For more information on enabling this (which you do via the blue “mobile device” icon at the top, next to the search icon) check the online Chrome developer docs.
You can either enter absolute values – which is in itself very handy – or grab the emulated device with your mouse and wiggle it around (which helps emulate more realistic device usage, I expect).
Again, here’s the page for you to try yourself.
In tomorrow’s post we’ll extend this implementation to look at other models, refactoring some of the UI and viewer control code in the process.