

October 31, 2014

TEDxCERN videos are now online

The videos from the TEDxCERN event I attended in late September are now online.

As I’m busy working on my AU 2014 presentations, here’s a selection of the ones I think will be of most interest to this audience. You can always go through the full playlist on YouTube, of course: they are all worth watching.


The surprising strengths of materials in the nanoworld | Julia Greer


Is a vaccine for cancer possible? | Sonia Trigueros


The weirdness of water could be the answer | Marcia Barbosa


When a tree calls for help | Topher White


Remote heart diagnosis through digital tablets | Arthur Zang


Energy storage under pressure | Danielle Fong


Taking the fingerprints of the universe | Julien Lesgourgues


How I built a nuclear reactor at the age of 13 | Jamie Edwards


Performance: Quantum Music | Nitin Sawhney


Performance: the LHC remix | Tim Exile

October 29, 2014

And the winner is…

Win-Win Situation

Thanks to all of you who contributed responses to last week’s “guess the dashboard” competition. I had a lot of fun seeing the responses roll in!

I didn’t actually expect anyone to get the “right” answer – as the service the dashboard is monitoring hadn’t been announced publicly at the time – but I was impressed by the Autodesk employees who used their knowledge to try to get a free t-shirt – that’s the kind of initiative that helped build this company into what it is today ;-). With 20-20 hindsight I probably should have specifically excluded Autodeskers from participating. I’ll try to remember that for next time.

I had a hard time choosing the winning entry… it was pretty clear it was going to be based on humour rather than reality, and there were a few really good ones. In the end I chose the entry by Alex Stenz – partly because I’m a huge fan of Douglas Adams and it’s been 30 years since the original Hitchhiker’s Guide to the Galaxy text adventure was released. (For those Douglas Adams fans who haven’t seen it, you might want to try the BBC’s 30th anniversary HGttG game. I do love a good text adventure.) There’s also something oddly compelling about the idea of using AutoCAD I/O to calculate the meaning of life, the universe and everything.

On the topic of possible uses for AutoCAD I/O, do start thinking about what you might use the service for. Last week I was part of an internal brainstorming session with members of the AutoCAD I/O team, and there were a number of great suggestions for potential services (whether developed internally and shared via a global Activity library or developed externally). I think it’d be great to extend this discussion to a virtual brainstorm via a blog post. But that’s for another day – I want to present some possibilities in the post itself, to get the discussion started.

Congratulations, Alex – I’ll get in touch by email to see whether one of the t-shirt sizes I have will work for you. :-)

photo credit: garryknight via photopin cc

October 27, 2014

Adding speech recognition to our stereoscopic Google Cardboard viewer

Speech recognition, not at its best

I nearly named this post “Creating a stereoscopic viewer for Google Cardboard using the Autodesk 360 viewer – Part 4”, leading on from the series introduction and then parts 1, 2 & 3. But then I decided this topic deserved its very own title. :-)

The seed for this post was sown during the VR Hackathon, at the beginning of which I had an inspiring chat with Theo Armour. Not only does Theo have a name worthy of a gladiator – and it turns out there is a list of gladiator names on the Internet, just one more reason I love it – he also has an inspiring view of technology and what it can bring us. Jeremy Tammik collaborated with Theo at the recent AEC Hackathon in New York, so I’m sure he knows what I’m talking about.

Anyway, firstly it turns out Theo is the person behind jaanga.com, and it was he who put together the template that got me started with Google Cardboard and the Autodesk Viewing & Data service. So I already owe Theo a debt of thanks for that.

Secondly, Theo has been thinking about where to go next with VR, specifically with regards to user input. A problem with holding a set of goggles in your hands is that it’s hard to do very much else with them. Google Cardboard does have a “button” on the side – really just a movable washer connected to a fixed magnet that influences the phone’s magnetometer – but as you can only access that from a native Android app, not an HTML page, it’s basically useless for our purposes.

I’d been looking at Leap Motion to help with this, which implies having one or more hands free but also adds a platform dependency: not only is their mobile SDK currently Android-specific, it’s also supported on a limited set of devices with sufficiently powerful processors, such as the Nexus 5. I’m still planning on pre-ordering a Nexus 6 and getting it working with that, but I’m also keen to move things forward in the meantime and consider solutions that don’t reduce the possible audience for this application.

Theo was clearly very excited about the potential for getting access to speech recognition in HTML5 apps. My initial reaction was “wow – surely you can’t do that from HTML5!?!” but Theo was keen to pursue this direction. Before I left San Francisco, Theo very kindly invited me for a nice apéro at the ferry building – I was taking the ferry back to Marin on Wednesday afternoon – where he unveiled a working prototype of his HTML5 viewer with functioning speech recognition. Too cool!

Theo’s demo app makes use of annyang, a simple JavaScript API that sits on top of the HTML5 Speech Recognition API that it turns out is exposed by most modern browsers. Who knew?
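annyang’s programming model is refreshingly small: you hand it a dictionary mapping phrases to callbacks and it routes recognized speech to them. Here’s a dependency-free sketch of that dispatch pattern – not the library itself, which also handles wildcards, splats and the underlying SpeechRecognition plumbing – just enough to show the shape of the idea:

```javascript
// Toy stand-in for annyang's phrase-to-callback dispatch: register
// command dictionaries, then route a recognized transcript to its
// handler. (Sketch only: the real library adds wildcard/splat
// matching and drives this from SpeechRecognition results.)
function createRecognizer() {
  var commands = {};
  return {
    addCommands: function (newCommands) {
      for (var phrase in newCommands) {
        commands[phrase.toLowerCase()] = newCommands[phrase];
      }
    },
    // The real library calls this internally with speech results;
    // here we feed transcripts in directly for illustration
    handle: function (transcript) {
      var fn = commands[transcript.trim().toLowerCase()];
      if (fn) {
        fn();
        return true;
      }
      return false;
    }
  };
}
```

Because a command set is just a plain phrase-to-function map, the same objects can drive both an on-screen UI and the speech interface – which is exactly the trick used below with the buttons and commands objects.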

So I went and shamelessly copied Theo’s approach, extending it for the Autodesk 360 viewer sample. I initially focused on implementing commands such as “explode”, “combine”, and zooming “in” and “out” – as well as “reset” and “reload” – but I also managed to find a way to make it work with the command definitions used to create our UI buttons for the front page. So you can also now switch models by saying the name of the model you want to load. A very handy enhancement.

It’s worth noting that it’s really best to load the first model via the UI – this allows us to force the page to fullscreen, as some UI interaction is needed for that – but after that you can simply use speech to load subsequent models.

Google Chrome does keep asking for permission to access the microphone, which is a little annoying, but it turns out that loading the page via “https” allows the browser to remember this. You just get the occasional beep, which is rather less annoying.

The interesting part of the HTML app is, as usual, the JavaScript code. So here that is:

var viewerLeft, viewerRight;

var updatingLeft = false, updatingRight = false;

var leftLoaded, rightLoaded, cleanedModel;

var leftPos, baseDir, upVector, initLeftPos;

var initZoom;

var expFac = 0, exp = 0;

var targExp = 0.5, xfac = 0.05, zfac = 0.3;

var direction = true;

 

var buttons = {

  'robot arm' : function () {

    launchViewer(

      'dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6c3RlYW1idWNrL1JvYm90QXJtLmR3Zng='   

    );

  },

  'front loader' : function () {

    launchViewer(

      'dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6c3RlYW1idWNrL0Zyb250JTIwTG9hZGVyLmR3Zng=',

      new THREE.Vector3(0, 0, 1)

    );

  },

  'suspension' : function () {

    launchViewer(

      'dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6c3RlYW1idWNrL1N1c3BlbnNpb24uZHdm'

    );

  },

  'house' : function () {

    launchViewer(

      'dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6c3RlYW1idWNrL2hvdXNlLmR3Zng='

    );

  },

  'V8 engine' : function () {

    launchViewer(

      'dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6c3RlYW1idWNrL1Y4RW5naW5lLnN0cA=='

    );

  },

  'morgan' : function () {

    launchViewer(

      'dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6c3RlYW1idWNrL1NwTTNXNy5mM2Q=',

      new THREE.Vector3(0, 0, 1),

      function () {

        zoom(

          viewerLeft,

          -48722.5, -54872, 44704.8,

          10467.3, 1751.8, 1462.8

        );

      }

    );

  }

}

 

var commands = {

  'explode': function () {

    if (checkViewers()) {

      expFac = expFac + 1;

      explode(true);

    }

  },

  'combine': function () {

    if (checkViewers()) {

      if (expFac > 0) {

        expFac = expFac - 1;

        explode(false);

      }

    }

  },

  'in': function () {

    if (checkViewers()) {

      zoomInwards(-zfac);

    }

  },

  'out': function () {

    if (checkViewers()) {

      zoomInwards(zfac);

    }

  },

  'reset': function () {

    if (checkViewers()) {

      expFac = 0;

      explode(false);

 

      if (initLeftPos) {

        var trg = viewerLeft.navigation.getTarget();

        var up = viewerLeft.navigation.getCameraUpVector();

 

        leftPos = initLeftPos.clone();

        zoom(

          viewerLeft,

          initLeftPos.x, initLeftPos.y, initLeftPos.z,

          trg.x, trg.y, trg.z, up.x, up.y, up.z

        );

      }

    }

  },

  'reload': function () {

    location.reload();

  }

};

 

function initialize() {

 

  // Populate our initial UI with a set of buttons, one for each

  // function in the Buttons object

 

  var panel = document.getElementById('control');

  for (var name in buttons) {

    var fn = buttons[name];

 

    var button = document.createElement('div');

    button.classList.add('cmd-btn');

 

    // Use the command name as the button's visible label

 

    button.innerHTML = name;

    button.onclick = (function (fn) {

      return function () { fn(); };

    })(fn);

 

    // Add the button with a space under it

 

    panel.appendChild(button);

    panel.appendChild(document.createTextNode('\u00a0'));

  }

 

  if (annyang) {

 

    // Add our buttons and commands to annyang

 

    annyang.addCommands(buttons);

    annyang.addCommands(commands);

 

    // Start listening

 

    annyang.start();

  }

}

 

function checkViewers() {

  if (viewerLeft && viewerRight)

    return viewerLeft.running && viewerRight.running;

  return false;

}

 

function launchViewer(docId, upVec, zoomFunc) {

 

  // Reset some variables when we reload

 

  if (viewerLeft) {

    viewerLeft.uninitialize();

    viewerLeft = null;

  }

  if (viewerRight) {

    viewerRight.uninitialize();

    viewerRight = null;

  }

  updatingLeft = false;

  updatingRight = false;

  leftPos = null;

  baseDir = null;

  upVector = null;

  initLeftPos = null;

  initZoom = null;

  expFac = 0;

  exp = 0;

  direction = true;

 

  // Assume the default "world up vector" of the Y-axis

  // (only atypical models such as Morgan and Front Loader require

  // the Z-axis to be set as up)

 

  upVec =

    typeof upVec !== 'undefined' ?

      upVec :

      new THREE.Vector3(0, 1, 0);

 

  // Ask for the page to be fullscreen

  // (can only happen in a function called from a

  // button-click handler or some other UI event)

 

  requestFullscreen();

 

  // Hide the controls that brought us here

 

  var controls = document.getElementById('control');

  controls.style.visibility = 'hidden';

 

  // Bring the layer with the viewers to the front

  // (important so they also receive any UI events)

 

  var layer1 = document.getElementById('layer1');

  var layer2 = document.getElementById('layer2');

  layer1.style.zIndex = 1;

  layer2.style.zIndex = 2;

 

  // Store the up vector in a global for later use

 

  upVector = upVec.clone();

 

  // The same for the optional Initial Zoom function

 

  initZoom =

    typeof zoomFunc !== 'undefined' ?

      zoomFunc :

      null;

 

  // Get our access token from the internal web-service API

 

  $.get(

    window.location.protocol + '//' +

    window.location.host + '/api/token',

    function (accessToken) {

 

      // Specify our options, including the provided document ID

 

      var options = {};

      options.env = 'AutodeskProduction';

      options.accessToken = accessToken;

      options.document = docId;

 

      // Create and initialize our two 3D viewers

 

      var elem = document.getElementById('viewLeft');

      viewerLeft = new Autodesk.Viewing.Viewer3D(elem, {});

 

      Autodesk.Viewing.Initializer(options, function () {

        viewerLeft.initialize();

        loadDocument(viewerLeft, options.document);

      });

 

      elem = document.getElementById('viewRight');

      viewerRight = new Autodesk.Viewing.Viewer3D(elem, {});

 

      Autodesk.Viewing.Initializer(options, function () {

        viewerRight.initialize();

        loadDocument(viewerRight, options.document);

      });

    }

  );

}

 

function loadDocument(viewer, docId) {

 

  // The viewer defaults to the full width of the container,

  // so we need to set that to 50% to get side-by-side

 

  viewer.container.style.width = '50%';

  viewer.resize();

 

  // Let's zoom in and out of the pivot - the screen

  // real estate is fairly limited - and reverse the

  // zoom direction

 

  viewer.navigation.setZoomTowardsPivot(true);

  viewer.navigation.setReverseZoomDirection(true);

 

  if (docId.substring(0, 4) !== 'urn:')

    docId = 'urn:' + docId;

 

  Autodesk.Viewing.Document.load(docId,

    function (document) {

 

      // Boilerplate code to load the contents

 

      var geometryItems = [];

 

      if (geometryItems.length == 0) {

        geometryItems =

          Autodesk.Viewing.Document.getSubItemsWithProperties(

            document.getRootItem(),

            { 'type': 'geometry', 'role': '3d' },

            true

          );

      }

      if (geometryItems.length > 0) {

        viewer.load(document.getViewablePath(geometryItems[0]));

      }

 

      // Add our custom progress listener and set the loaded

      // flags to false

 

      leftLoaded = rightLoaded = cleanedModel = false;

      viewer.addEventListener('progress', progressListener);

    },

    function (errorMsg, httpErrorCode) {

      var container = document.getElementById('viewLeft');

      if (container) {

        alert('Load error ' + errorMsg);

      }

    }

  );

}

 

// Progress listener to set the view once the data has started

// loading properly (we get a 5% notification early on that we

// need to ignore - it comes too soon)

 

function progressListener(e) {

 

  // If we haven't cleaned this model's materials and set the view

  // and both viewers are sufficiently ready, then go ahead

 

  if (!cleanedModel &&

    ((e.percent > 0.1 && e.percent < 5) || e.percent > 5)) {

 

    if (e.target.clientContainer.id === 'viewLeft')

      leftLoaded = true;

    else if (e.target.clientContainer.id === 'viewRight')

      rightLoaded = true;

 

    if (leftLoaded && rightLoaded && !cleanedModel) {

 

      if (initZoom) {

 

        // Iterate the materials to change any red ones to grey

 

        // (We only need this for the Morgan model, which has

        // translation issues from Fusion 360... which is also

        // the only model to provide an initial zoom function)

 

        for (var p in viewerLeft.impl.matman().materials) {

          var m = viewerLeft.impl.matman().materials[p];

          if (m.color.r >= 0.5 && m.color.g == 0 && m.color.b == 0) {

            m.color.r = m.color.g = m.color.b = 0.5;

            m.needsUpdate = true;

          }

        }

        for (var p in viewerRight.impl.matman().materials) {

          var m = viewerRight.impl.matman().materials[p];

          if (m.color.r >= 0.5 && m.color.g == 0 && m.color.b == 0) {

            m.color.r = m.color.g = m.color.b = 0.5;

            m.needsUpdate = true;

          }

        }

 

        // If provided, use the "initial zoom" function

 

        initZoom();

      }

 

      setTimeout(

        function () {

          initLeftPos = viewerLeft.navigation.getPosition();

 

          //TOREMOVE

          //viewerLeft.autocam.setCurrentViewAsFront();

 

          transferCameras(true);

        },

        500

      );

 

      watchTilt();

 

      cleanedModel = true;

    }

  }

  else if (cleanedModel && e.percent > 10) {

 

    // If we have already cleaned and are even further loaded,

    // remove the progress listeners from the two viewers and

    // watch the cameras for updates

 

    unwatchProgress();

 

    watchCameras();

  }

}

 

function requestFullscreen() {

 

  // Must be performed from a UI event handler

 

  var el = document.documentElement,

      rfs =

        el.requestFullScreen ||

        el.webkitRequestFullScreen ||

        el.mozRequestFullScreen;

  if (rfs)
    rfs.call(el);

}

 

// Add and remove the pre-viewer event handlers

 

function watchCameras() {

  viewerLeft.addEventListener('cameraChanged', left2right);

  viewerRight.addEventListener('cameraChanged', right2left);

}

 

function unwatchCameras() {

  viewerLeft.removeEventListener('cameraChanged', left2right);

  viewerRight.removeEventListener('cameraChanged', right2left);

}

 

function unwatchProgress() {

  viewerLeft.removeEventListener('progress', progressListener);

  viewerRight.removeEventListener('progress', progressListener);

}

 

function watchTilt() {

  if (window.DeviceOrientationEvent)

    window.addEventListener('deviceorientation', orb);

}

 

// Event handlers for the cameraChanged events

 

function left2right() {

  if (!updatingRight) {

    updatingLeft = true;

    transferCameras(true);

    setTimeout(function () { updatingLeft = false; }, 500);

  }

}

 

function right2left() {

  if (!updatingLeft) {

    updatingRight = true;

    transferCameras(false);

    setTimeout(function () { updatingRight = false; }, 500);

  }

}

 

// And for the deviceorientation event

 

function orb(e) {

 

  if (e.alpha && e.gamma) {

 

    // Remove our handlers watching for camera updates,

    // as we'll make any changes manually

    // (we won't actually bother adding them back, afterwards,

    // as this means we're in mobile mode and probably inside

    // a Google Cardboard holder)

 

    unwatchCameras();

 

    // Our base direction allows us to make relative horizontal

    // rotations when we rotate left & right

 

    if (!baseDir)

      baseDir = e.alpha;

 

    if (checkViewers()) {

 

      var deg2rad = Math.PI / 180;

 

      // gamma is the front-to-back in degrees (with

      // this screen orientation) with +90/-90 being

      // vertical and negative numbers being 'downwards'

      // with positive being 'upwards'

 

      var vert = (e.gamma + (e.gamma <= 0 ? 90 : -90)) * deg2rad;

 

      // alpha is the compass direction the device is

      // facing in degrees. This equates to the

      // left - right rotation in landscape

      // orientation (with 0-360 degrees)

 

      var horiz = (e.alpha - baseDir) * deg2rad;

 

      orbitViews(vert, horiz);

    }

  }

}
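The angle arithmetic in orb() can be pulled out into a pure helper – handy for checking the landscape-mode conventions without a device in hand. This just restates the two formulas above (the helper name is mine):

```javascript
// Convert deviceorientation readings (degrees) into the vertical
// and horizontal orbit angles (radians) used by orbitViews().
// gamma is the front-to-back tilt (+90/-90 being vertical in this
// screen orientation); alpha is the compass heading, made relative
// to the heading captured when tracking started (baseDir).
function tiltToOrbitAngles(alpha, gamma, baseDir) {
  var deg2rad = Math.PI / 180;
  return {
    vert: (gamma + (gamma <= 0 ? 90 : -90)) * deg2rad,
    horiz: (alpha - baseDir) * deg2rad
  };
}
```

So holding the device vertically gives a vert of zero, and turning on the spot only changes horiz – which is why capturing baseDir on the first event makes the rotation relative to wherever you happened to be facing.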

 

function transferCameras(leftToRight) {

 

  // The direction argument dictates the source and target

 

  var source = leftToRight ? viewerLeft : viewerRight;

  var target = leftToRight ? viewerRight : viewerLeft;

 

  var pos = source.navigation.getPosition();

  var trg = source.navigation.getTarget();

 

  // Set the up vector manually for both cameras

 

  source.navigation.setWorldUpVector(upVector);

  target.navigation.setWorldUpVector(upVector);

 

  // Get the new position for the target camera

 

  var up = source.navigation.getCameraUpVector();

 

  // Get the position of the target camera

 

  var newPos = offsetCameraPos(source, pos, trg, leftToRight);

 

  // Save the left-hand camera position: device tilt orbits

  // will be relative to this point

 

  leftPos = leftToRight ? pos : newPos;

 

  // Zoom to the new camera position in the target

 

  zoom(

    target, newPos.x, newPos.y, newPos.z, trg.x, trg.y, trg.z,

    up.x, up.y, up.z

  );

}

 

function getDistance(v1,v2) {

  var diff = new THREE.Vector3().subVectors(v1, v2);

  return diff.length();

}

 

function offsetCameraPos(source, pos, trg, leftToRight) {

 

  // Use a small fraction of the distance for the camera offset

 

  var disp = getDistance(pos, trg) * 0.04;

 

  // Clone the camera and return its X translated position

 

  var clone = source.autocamCamera.clone();

  clone.translateX(leftToRight ? disp : -disp);

  return clone.position;

}
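The eye separation above is proportional to the camera-to-target distance, so the parallax stays sensible whether you’re viewing a house or a V8 engine. The scalar part of that calculation, without the THREE dependency:

```javascript
// Stereo eye separation as used by offsetCameraPos(): 4% of the
// camera-to-target distance, later applied along the camera's
// local X axis via translateX().
function stereoSeparation(pos, trg) {
  var dx = pos.x - trg.x,
      dy = pos.y - trg.y,
      dz = pos.z - trg.z;
  return Math.sqrt(dx * dx + dy * dy + dz * dz) * 0.04;
}
```

The 4% factor is an eyeballed constant rather than anything physically derived – a fixed world-space separation would feel wrong across models of wildly different scales.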

 

function orbitViews(vert, horiz) {

 

  // We'll rotate our position based on the initial position

  // and the target will stay the same

 

  var pos = leftPos.clone();

  var trg = viewerLeft.navigation.getTarget();

 

  // Start by applying the left/right orbit

  // (we need to check the up/down value, though)

 

  if (vert < 0)

    horiz = horiz + Math.PI;

 

  var zAxis = upVector.clone();

  pos.applyAxisAngle(zAxis, horiz);

 

  // Now add the up/down rotation

 

  var axis = new THREE.Vector3().subVectors(trg, pos).normalize();

  axis.cross(zAxis);

  pos.applyAxisAngle(axis, -vert);

 

  // Zoom in with the lefthand view

 

  var up = viewerLeft.navigation.getCameraUpVector();

 

  zoom(

    viewerLeft,

    pos.x, pos.y, pos.z,

    trg.x, trg.y, trg.z

  );

 

  // Get a camera slightly to the right

 

  var pos2 = offsetCameraPos(viewerLeft, pos, trg, true);

 

  // And zoom in with that on the righthand view, too

 

  zoom(

    viewerRight,

    pos2.x, pos2.y, pos2.z,

    trg.x, trg.y, trg.z,

    up.x, up.y, up.z

  );

}

 

function explode(outwards) {

  if (outwards != direction)

    direction = outwards;

 

  setTimeout(

    function () {

      exp = exp + (direction ? xfac : -xfac);

      setTimeout(function () { viewerLeft.explode(exp); }, 0);

      setTimeout(function () { viewerRight.explode(exp); }, 0);

      if ((direction && exp < targExp * expFac) ||

        (!direction && exp > targExp * expFac))

        explode(direction);

    },

    50

  );

}
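Since explode() re-arms itself every 50 ms and moves exp by xfac on each tick, the animation schedule is fully determined by the constants at the top of the file. A timer-free sketch of that schedule (assuming the same targExp = 0.5 and xfac = 0.05):

```javascript
// Precompute the sequence of explosion scales that the explode()
// timer loop feeds to viewer.explode() when moving from fully
// combined (0) out to a given explode level.
function explodeSchedule(expFac, targExp, xfac) {
  var ticks = Math.round((targExp * expFac) / xfac);
  var steps = [];
  for (var i = 1; i <= ticks; i++) {
    steps.push(i * xfac);
  }
  return steps;
}
```

With the defaults, one spoken “explode” means ten ticks of 50 ms each – roughly half a second of animation per level.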

 

function zoomAlongCameraDirection(viewer, factor) {

 

  var pos = leftPos.clone();

  var trg = viewer.navigation.getTarget();

 

  var disp = trg.clone().sub(pos).multiplyScalar(factor);

  pos.sub(disp);

 

  return pos;

}

 

function zoomInwards(factor) {

 

  leftPos = zoomAlongCameraDirection(viewerLeft, factor);

}

 

// Set the camera based on a position, target and optional up vector

 

function zoom(viewer, px, py, pz, tx, ty, tz, ux, uy, uz) {

 

  // Make sure our up vector is correct for this model

 

  viewer.navigation.setWorldUpVector(upVector, true);

 

  viewer.navigation.setView(

    new THREE.Vector3(px, py, pz),

    new THREE.Vector3(tx, ty, tz)

  );

 

  if (ux !== undefined && uy !== undefined && uz !== undefined) {

    var up = new THREE.Vector3(ux, uy, uz);

    viewer.navigation.setCameraUpVector(up);

  }

}

Here’s a video of how it works.


You’ll note that the odd command gets dropped – and that’s in a relatively noise-free environment – but I think you’ll find it’s mostly a very helpful addition to the viewer’s feature set. Thanks again to Theo for the inspiration!

photo credit: Filmstalker via photopin cc

Update:

I fixed a logic error in the code: the zoom was being applied to the camera position post tilt transformation, and so would end up being rotated. The above, updated code works much better than the version posted originally.

October 23, 2014

AutoCAD I/O API: a new batch processing web-service

This is really interesting news I’ve been waiting to share for a while now. And of course it’s the answer to the question I posed in my last post (this is the service the dashboard has been monitoring). Once I get back home to Switzerland I’ll go through the various comments on the post and LinkedIn, to see who wins the prize. :-)

The AutoCAD team has been working hard on a cloud-based batch-processing framework that works with AutoCAD data. The current name for the service is the AutoCAD I/O API – Beta.

Random retro photo of a 36-pin Centronics parallel printer port

The service is powered by AcCore, the cross-platform AutoCAD “Core Engine” that was originally created when we built AutoCAD for Mac, during the “Big Split” project. (A side note: the initial working name for this service was AutoCAD Core Engine Services – or ACES – so don’t be confused if you still see references to that name.)

The service is targeted at offline operations – meaning batch processing or operations that don’t require immediate feedback – which allows us to queue the operations to execute optimally. That said, we’re usually talking about seconds to execute, rather than hours or days. :-)

In essence, the service allows developers to call through to an instance of AcCore – running up there in the cloud – to run an AutoCAD Script that performs operations on AutoCAD data, and then to access the results, all over HTTP. Which means, of course, that it can be used from any device that can speak HTTP, which now includes a number of children’s toys. ;-)
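At the HTTP level the flow is the same from any client: submit a work item, poll its Status until it leaves the Pending/InProgress states, then download the results. A transport-agnostic sketch of the polling step – getStatus stands in for the actual authenticated HTTP request, and the names are my own:

```javascript
// Generic poll-until-done loop for a batch service: keep asking
// for the work item's status until it is neither Pending nor
// InProgress. A real client would sleep between polls; getStatus
// wraps the underlying HTTP call.
function pollUntilDone(getStatus, maxPolls) {
  for (var i = 1; i <= maxPolls; i++) {
    var status = getStatus();
    if (status !== 'Pending' && status !== 'InProgress') {
      return { status: status, polls: i };
    }
  }
  return { status: 'TimedOut', polls: maxPolls };
}
```

Polling with a fixed delay is the simplest approach for an offline-oriented service like this one; since most jobs finish in seconds, a few polls usually suffice.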

That said, as with any authenticated web-service you will need a client ID and key to gain access. You will not want to share this as part of a client-side application, so you’ll need to create a lightweight web-service yourself that handles authentication, just as we saw when developing an application with Autodesk’s first PaaS offering, the Viewing & Data API.
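The usual pattern here is a thin server-side endpoint – like the /api/token route the stereoscopic viewer fetched in the previous post – that holds the credentials and hands the browser only a short-lived token. A sketch of such a handler, with the credential exchange stubbed out (acquireToken and the handler shape are my own names, not part of any Autodesk API):

```javascript
// Minimal token-proxy handler: the client ID/key never leave the
// server; the browser receives only the resulting access token.
// acquireToken(callback) stands in for the real credential
// exchange against the authentication service.
function makeTokenHandler(acquireToken) {
  return function (req, res) {
    acquireToken(function (err, token) {
      if (err) {
        res.statusCode = 500;
        res.end('token acquisition failed');
      } else {
        res.statusCode = 200;
        res.end(token);
      }
    });
  };
}
```

The point is simply that the secret never reaches the client; only the token – which expires on its own – is exposed.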

But for testing purposes we won’t worry about that. Our first application – courtesy of my friend and colleague, Albert Szilvasy – is a simple console application that makes use of the client ID and key directly to authenticate against the AutoCAD I/O API and then use it to create a DWG containing a line and output that to PDF. (In case you’re interested in this service’s “bona fides” it is currently being used to service all PDF output requests from AutoCAD 360. And that’s really just the beginning…)

To get this working, create a simple console application project inside Visual Studio. Call it “AutoCADIoSample” – just to make sure the code works when you copy & paste it in – and add a service reference to “https://autocad.io/api/v1” called “AutoCADIo” (you’ll find step-by-step instructions here).

Now you should be ready to copy & paste the following C# code into the Program.cs file. You will, of course, need to apply for your own ID and key (you can do so from here) and paste them into the clientId and clientKey constants.

using System;

using System.IO;

using System.Linq;

using System.Net.Http;

using System.Data.Services.Client;

using Microsoft.IdentityModel.Clients.ActiveDirectory;

 

namespace AutoCADIoSample

{

  class Program

  {

    const string clientId = "12345678-1234-1234-1234-123467890AB";

    const string clientKey =

      "s0meMad3upT3xt5upp053dT0R3pr353ntAVal1dk3y";

 

    static void Main(string[] args)

    {

      // Obtain a token from Azure Active Directory

 

      var authCon =

        new AuthenticationContext(

          "https://login.windows.net/acesprodactdir.onmicrosoft.com"

        );

      var cred = new ClientCredential(clientId,clientKey);

      var token =

        authCon.AcquireToken("https://autocad.io/api/v1", cred).

          CreateAuthorizationHeader();

 

      // Instruct client side library to insert token as

      // Authorization value into each request

 

      var container =

        new AutoCADIo.Container(

          new Uri("https://autocad.io/api/v1/")

        );

      container.SendingRequest2 +=

        (s, e) => e.RequestMessage.SetHeader("Authorization", token);

 

      // Remove any existing instances of our activity

 

      var actsToDel =

        container.Activities.Where(a => a.Id == "CreateALine");

      foreach (var actToDel in actsToDel)

        container.DeleteObject(actToDel);

      container.SaveChanges();

 

      // Create our new activity which generates a DWG containing

      // a line and exports it to PDF

 

      var act =

        new AutoCADIo.Activity()

        {

          UserId = "",

          Id = "CreateALine",

          Version = 1,

          Instruction = new AutoCADIo.Instruction()

          {

            // The instruction is simply an AutoCAD Script

 

            Script =

              "_tilemode 1 _line 0,0 1,1  _tilemode 0 " +

              "_save result.dwg\n" +

              "_-export _pdf _all result.pdf\n"

          },

          Parameters = new AutoCADIo.Parameters()

          {

            InputParameters =

            {

              new AutoCADIo.Parameter()

              {

                Name = "HostDwg", LocalFileName = "$(HostDwg)"

              }

            },

            OutputParameters =

            {

              new AutoCADIo.Parameter()

              {

                Name = "DwgResult", LocalFileName = "result.dwg"

              },

              new AutoCADIo.Parameter()

              {

                Name = "PdfResult", LocalFileName = "result.pdf"

              }

            }

          },

          RequiredEngineVersion = "20.0"

        };

 

      // Add the activity to our container

 

      container.AddToActivities(act);

      container.SaveChanges();

 

      // List the available activities: should include CreateALine

 

      foreach (var a in container.Activities)

      {

        Console.WriteLine("-----------");

        Console.WriteLine("Activity Id: {0}", a.Id);

        Console.WriteLine("User Id: {0}", a.UserId);

        Console.WriteLine("Instruction: {0}", a.Instruction.Script);

        Console.WriteLine(

          "Command Line: {0}",

          !string.IsNullOrWhiteSpace(

            a.Instruction.CommandLineParameters

          ) ? a.Instruction.CommandLineParameters :

          "/i {hostdwg} /i {instructions.scr}");

        foreach (var p in a.Parameters.InputParameters)

          Console.WriteLine(

            "Input '{0}' will be named as '{1}' in working folder.",

            p.Name, p.LocalFileName

          );

        foreach (var p in a.Parameters.OutputParameters)

          Console.WriteLine(

            "Output '{0}' will cause file '{1}' to be uploaded " +

            "from working folder.", p.Name, p.LocalFileName

          );

      }

 

      // Create a workitem referencing our new activity

 

      var wi = new AutoCADIo.WorkItem()

      {

        UserId = "", // Must be set to empty

        Id = "", // Must be set to empty

        Arguments = new AutoCADIo.Arguments(),

        Version = 1, // Should always be 1

        ActivityId =

          new AutoCADIo.EntityId()

          {

            UserId = clientId, Id = "CreateALine"

          }

      };

 

      // Specify an input DWG, which will actually be a blank DWT

 

      wi.Arguments.InputArguments.Add(

        new AutoCADIo.Argument()

        {

          Name = "HostDwg", // Must match activity's input parameter

          Resource =

            "https://s3.amazonaws.com/" +

            "AutoCAD-Core-Engine-Services/TestDwg/acad.dwt",

          StorageProvider = "Generic" // Generic HTTP download

        }

      );

 

      // We'll post the DWG to a specified storage location

      // (using generic HTTP rather than storing to A360)

 

      wi.Arguments.OutputArguments.Add(

        new AutoCADIo.Argument()

        {

          Name = "DwgResult", // Must match activity's output param

          StorageProvider = "Generic", // Generic HTTP upload

          HttpVerb = "POST", // Use HTTP POST when delivering result

          Resource = null // Use storage provided by AutoCAD.io

        }

      );

 

      // We'll also post the PDF to a specified storage location

      // (using generic HTTP rather than storing to A360)

 

      wi.Arguments.OutputArguments.Add(

        new AutoCADIo.Argument()

        {

          Name = "PdfResult", // Must match activity's output param

          StorageProvider = "Generic", // Generic HTTP upload

          HttpVerb = "POST", // Use HTTP POST when delivering result

          Resource = null // Use storage provided by AutoCAD.io

        }

      );

 

      // Add the work item to our container

 

      container.AddToWorkItems(wi);

      container.SaveChanges();

 

      // Once saved, the work item should start executing...

      // We'll poll every 5 seconds to see if it's finished

 

      do

      {

        Console.WriteLine("Sleeping a bit...");

        System.Threading.Thread.Sleep(5000);

        container.LoadProperty(wi, "Status"); // Http request here

      }

      while (wi.Status == "Pending" || wi.Status == "InProgress");

 

      Console.WriteLine("\nRequest completed. Querying results...");

 

      // Re-query the service so that we can use the results

 

      container.MergeOption = MergeOption.OverwriteChanges;

      wi =

        container.WorkItems.Where(

          p => p.UserId == wi.UserId && p.Id == wi.Id

        ).First();

 

      // Resource property of the output argument "PdfResult"

      // will have the output url for the PDF

      // (for the DWG we'd do exactly the same for "DwgResult")

 

      var url =

        wi.Arguments.OutputArguments.First(

          a => a.Name == "PdfResult"

        ).Resource;

      if (url != null)

      {

        // Download the resultant PDF, store it locally

 

        var client = new HttpClient();

        var content =

          (StreamContent)client.GetAsync(url).Result.Content;

        var pdf = "z:\\Data\\line.pdf";

        if (File.Exists(pdf))

          File.Delete(pdf);

        using (var output = File.Create(pdf))

        {

          content.ReadAsStreamAsync().Result.CopyTo(output);

          output.Close();

        }

        Console.WriteLine("PDF downloaded to \"{0}\".", pdf);

      }

 

      url = wi.StatusDetails.Report;

      if (url != null)

      {

        // Download the report, store it locally

 

        var client = new HttpClient();

        var content =

          (StreamContent)client.GetAsync(url).Result.Content;

        var report = "z:\\Data\\AutoCADIoReport.txt";

        if (File.Exists(report))

          File.Delete(report);

        using (var output = File.Create(report))

        {

          content.ReadAsStreamAsync().Result.CopyTo(output);

          output.Close();

        }

        Console.WriteLine("Report downloaded to \"{0}\".", report);

      }

      // Wait for a key to be pressed

 

      Console.WriteLine("Press a key to continue...");

      Console.ReadKey();

    }

  }

}

A few words on what’s happening here.

After authenticating to use the service, we create a new Activity – think of this as being like a cloud-based “function” for us to call – which will create a DWG file and publish it to PDF.

To make use of this Activity, we need to create a WorkItem – which is like a function call providing the various arguments the function needs to operate.

Once the WorkItem has completed, we simply need to re-query its data via the service: by then the AutoCAD I/O API should have populated it with the various URLs to the output data, which we can then download and save to local files.

Here’s the console window output when we run this code:

AutoCADIoSample in action

Here’s the PDF:

Output PDF

And here are the contents of the report, to give you a sense for the kind of logging performed:

[10/03/2014 08:12:05] Starting work item 9c6f00ec93c1480dba00cd0974b84a46

[10/03/2014 08:12:05] Start download phase.

[10/03/2014 08:12:05] Start downloading file https://s3.amazonaws.com/AutoCAD-Core-Engine-Services/TestDwg/acad.dwt.

[10/03/2014 08:12:05] Bytes downloaded = 31419

[10/03/2014 08:12:05] https://s3.amazonaws.com/AutoCAD-Core-Engine-Services/TestDwg/acad.dwt downloaded as C:\Users\acesworker\AppData\LocalLow\jobs\9c6f00ec93c1480dba00cd0974b84a46\acad.dwt.

[10/03/2014 08:12:05] End download phase.

[10/03/2014 08:12:05] Start preparing script and command line parameters.

[10/03/2014 08:12:05] Start script content.

[10/03/2014 08:12:05] _tilemode 1 _line 0,0 1,1  _tilemode 0 _save result.dwg

_-export _pdf _all result.pdf

 

[10/03/2014 08:12:05] End script content.

[10/03/2014 08:12:05] Command line: /i "C:\Users\acesworker\AppData\LocalLow\jobs\9c6f00ec93c1480dba00cd0974b84a46\acad.dwt" /isolate job_9c6f00ec93c1480dba00cd0974b84a46 "C:\Users\acesworker\AppData\LocalLow\jobs\9c6f00ec93c1480dba00cd0974b84a46\userdata" /s "C:\Users\acesworker\AppData\LocalLow\jobs\9c6f00ec93c1480dba00cd0974b84a46\script.scr"

[10/03/2014 08:12:05] End preparing script and command line parameters.

[10/03/2014 08:12:05] Start script phase.

[10/03/2014 08:12:05] Start AutoCAD Core Console output.

[10/03/2014 08:12:05] Redirect stdout (file: C:\Users\ACESWO~1\AppData\Local\Temp\accc21082).

[10/03/2014 08:12:05] AutoCAD Core Engine Console - Copyright Autodesk, Inc 2009-2013.

[10/03/2014 08:12:05] Isolating to userId=job_9c6f00ec93c1480dba00cd0974b84a46, userDataFolder=C:\Users\acesworker\AppData\LocalLow\jobs\9c6f00ec93c1480dba00cd0974b84a46\userdata.

[10/03/2014 08:12:05] Regenerating model.

[10/03/2014 08:12:05] Command:

[10/03/2014 08:12:05] Command:

[10/03/2014 08:12:05] Command:

[10/03/2014 08:12:05] Command: _tilemode

[10/03/2014 08:12:05] Enter new value for TILEMODE <1>: 1

[10/03/2014 08:12:05] Command: _line

[10/03/2014 08:12:05] Specify first point: 0,0

[10/03/2014 08:12:05] Specify next point or [Undo]: 1,1

[10/03/2014 08:12:05] Specify next point or [Undo]:

[10/03/2014 08:12:05] Command: _tilemode

[10/03/2014 08:12:05] Enter new value for TILEMODE <1>: 0 Regenerating layout.

[10/03/2014 08:12:05] Regenerating model - caching viewports.

[10/03/2014 08:12:05] Command: _save Save drawing as <C:\Users\acesworker\AppData\LocalLow\jobs\9c6f00ec93c1480dba00cd0974b84a46\userdata\Local\template\acad.dwt>: result.dwg

[10/03/2014 08:12:06] Command: _-export Enter file format [Dwf/dwfX/Pdf] <dwfX>_pdf Enter plot area [Current layout/All layouts]<Current Layout>: _all

[10/03/2014 08:12:06] Enter file name <acad-Layout1.pdf>: result.pdf

[10/03/2014 08:12:06] Regenerating layout.

[10/03/2014 08:12:06] Regenerating model.

[10/03/2014 08:12:06] Command:

[10/03/2014 08:12:06] Command: Effective plotting area:  8.04 wide by 10.15 high

[10/03/2014 08:12:06] Effective plotting area:  6.40 wide by 8.40 high

[10/03/2014 08:12:06] Plotting viewport 2.

[10/03/2014 08:12:06] Plotting viewport 1.

[10/03/2014 08:12:06] Command: _quit

[10/03/2014 08:12:06] End AutoCAD Core Console output

[10/03/2014 08:12:06] End script phase.

[10/03/2014 08:12:06] Start upload phase.

[10/03/2014 08:12:06] Start uploading.

[10/03/2014 08:12:06] Target url: https://acesprod-bucket.s3-us-west-1.amazonaws.com/aces-workitem-outputs/9c6f00ec93c1480dba00cd0974b84a46/result.dwg?AWSAccessKeyId=ASIAIURT4LB4UT6AQUQQ&Expires=1412327526&x-amz-security-token=AQoDYXdzEHAa0ANVvX5bcflsH6HOUgkdeZaXsnR523sDP0j%2FwKSG%2B4fXEwLpAQF5oOXaq2s2gOIFFlbY0AeL7K%2BTx%2Bpnr2wyc5LVAgu5YrTZDt01BTS4YL5NYGPHJqZuYrFpX673UomYh1qdhK31l%2BJFzqk1L5NZofkQneY9FUPYQGxkEhGivI4ZCc%2FNqvd250Epc20DaWbAboE2kjLtEp5XkZRmfPR5StaerELbJNDk6ETlZBN4z%2FwSTxR5Yg1lhq%2BbIc27fDroU%2BLWJrkgbJUmQpXAqLDnmoVRR6RUopcWSM0sS8Mecq7iv%2BGhW%2F2udeMT8Ik9xfeVn19xRJ%2BVzww%2FkT6lY8v5AkwSVx3OGNAFPlAmFOPwWEzFrSTQXn9XU9hkE2TQY29wiLRTbL5EjOxV1anrYRnm7UjIOpY0h%2BdQjQO4fer3SAJZWx17Kk%2FF0iGT35n09pGElPqpiwcy%2FoCjNs432TGJXMLq1mOw5KqEUc7CkMF6pPbiJUc5109tsS4SALh%2B5cQhWP0pibYKns1vsxZioA9mEVClsezKsq%2BJRzjUkWbpVbEDz7fCy7ncY0yN0gWCTX5eIWwQdbzg%2BP%2Bv9au44OhJMPpiOu54IUCVNZnY2Du2kEkgayD4krmhBQ%3D%3D&Signature=4lUgaBWg6N8KeNUgpGfSl7Wcoy8%3D

[10/03/2014 08:12:06] End uploading.

[10/03/2014 08:12:06] Start uploading.

[10/03/2014 08:12:06] Target url: https://acesprod-bucket.s3-us-west-1.amazonaws.com/aces-workitem-outputs/9c6f00ec93c1480dba00cd0974b84a46/result.pdf?AWSAccessKeyId=ASIAIURT4LB4UT6AQUQQ&Expires=1412327527&x-amz-security-token=AQoDYXdzEHAa0ANVvX5bcflsH6HOUgkdeZaXsnR523sDP0j%2FwKSG%2B4fXEwLpAQF5oOXaq2s2gOIFFlbY0AeL7K%2BTx%2Bpnr2wyc5LVAgu5YrTZDt01BTS4YL5NYGPHJqZuYrFpX673UomYh1qdhK31l%2BJFzqk1L5NZofkQneY9FUPYQGxkEhGivI4ZCc%2FNqvd250Epc20DaWbAboE2kjLtEp5XkZRmfPR5StaerELbJNDk6ETlZBN4z%2FwSTxR5Yg1lhq%2BbIc27fDroU%2BLWJrkgbJUmQpXAqLDnmoVRR6RUopcWSM0sS8Mecq7iv%2BGhW%2F2udeMT8Ik9xfeVn19xRJ%2BVzww%2FkT6lY8v5AkwSVx3OGNAFPlAmFOPwWEzFrSTQXn9XU9hkE2TQY29wiLRTbL5EjOxV1anrYRnm7UjIOpY0h%2BdQjQO4fer3SAJZWx17Kk%2FF0iGT35n09pGElPqpiwcy%2FoCjNs432TGJXMLq1mOw5KqEUc7CkMF6pPbiJUc5109tsS4SALh%2B5cQhWP0pibYKns1vsxZioA9mEVClsezKsq%2BJRzjUkWbpVbEDz7fCy7ncY0yN0gWCTX5eIWwQdbzg%2BP%2Bv9au44OhJMPpiOu54IUCVNZnY2Du2kEkgayD4krmhBQ%3D%3D&Signature=leR9Gdzg6ggabjNHI6QEWBcPccQ%3D

[10/03/2014 08:12:06] End uploading.

[10/03/2014 08:12:06] End upload phase.

[10/03/2014 08:12:06] Job finished with result Succeeded

This service clearly has a lot of potential, especially for creating applications where you need some kind of DWG processing from an environment that isn’t suited to hosting AutoCAD (such as a mobile app or a web-based configurator that cranks out DWGs).

I would expect a modest cost to be associated with using the service, in due course, so don’t be surprised when that happens. But right now you can give it a try for free and consider how such a service might be used in your applications.

One area that I’ll show in a follow-up post is how to include custom application modules in your activities, so you can have custom commands included in the scripts you execute via the AutoCAD I/O API.

photo credit: dvanzuijlekom via photopin cc

October 22, 2014

A dashboard... but what for?

Here’s a little bit of fun. There’s a new dashboard in the San Rafael office, but what does it show?

Dashboard

Post your guess as a comment: the closest – or most humorous, depending on my mood – will win a free “I♥3D” Autodesk T-shirt (I’ll contact the winner to get their size).

And when I say humorous that doesn’t mean rude or insulting. Please keep it clean & polite, people. :-)

October 20, 2014

VR Hackathon 2014 in SF

It’s been a busy few days. After being in full-day meetings on Thursday and Friday, I headed down with Jim Quanci to the VR Hackathon’s kick-off event on Friday night. It was held at the newly refurbished Gray Area Theater in San Francisco’s Mission district.

The Friday night “mega meetup” was a great way to kick the event off, with presentations from NASA’s JPL on how they teamed up with Sony to develop a prototype VR system to control robots for asteroid mining.

Asteroid mining

There was also an interesting presentation on the evolution of VR tech from Leap Motion’s founder and CTO, David Holz.

David Holz from Leap Motion

Jim and I set up a table – as Autodesk sponsored the event – and over the course of the weekend talked to various people about the Autodesk Viewing & Data API (and about our products and APIs, in general).

Jim talking 3D

To help demo the stereoscopic viewer – and to attract people over to check it out – I had some fun putting together a version that auto-orbits and explodes the contents (once fully loaded). This version is best viewed in a browser, of course, as it doesn’t respond to device tilt.

Our DODOcase and PCs on the last day

I wasn’t sure whether I’d end up participating in a team, or not, but I ended up having so much fun hanging out with Jim and chatting to people that I stuck with that.

There were lots of fun things going on with the various teams…

Lots of medically oriented devices

A team working with Google Cardboard

There were even a few Autodeskers present. Lars Schneider – a member of the Infraworks team in Potsdam, Germany – formed a team with Torsten Becker, a friend of his who was also visiting SF. They’re pictured here with Michael Beale, who has worked on our web-based viewing technology and is currently on the rendering-as-a-service team.

Michael, Torsten and Lars

Aside from answering questions and giving demos, I also spent some time checking out the other sponsors’ technology. Sony’s asteroid mining tech was neat:

Kean in Morpheus

As was the combination of Leap Motion with Oculus Rift:

Leap and Oculus

Leap Motion's Oculus Rift demo - keeping balls in the air

Aside from seeing Leap Motion with Oculus Rift, at least one team was using an alpha version of the Android SDK to provide input into a Google Cardboard-based game. Something I intend to do myself, once I manage to get a phone that supports the SDK.

Someone using Google Cardboard with Leap Motion via the Android SDK

Sunday afternoon was all about judging. A number of the more tethered solutions were judged by a roving panel of expert judges…

Judges judging

… but several others ended up being presented on the main stage. Here’s part of a video of Lars & Torsten’s Oculus Rift + Leap Motion app, which – I’m very happy to say – ended up winning the WebVR category.

Lars and Torsten's Hackathon demo

(Lars tells me they’ll be posting the code soon – I’ll be sure to link to it here.)

Way to go, guys – makes me feel good to see a fellow Autodesker doing so well at this event. :-)

Overall it was a great weekend. There were some really cool projects – such as a Leap Motion-based hand tremor detector, a procedurally-generated game world (which reminded me of Elite) and a CAD-like tool that allows you to tweak the design of a lamp shade by tweaking the position of shadows on the wall. Awesome stuff.

Many thanks to Damon Hernandez and members of the Web3D Consortium for all the hard work. I hope I’ll be able to make it across to the next event!

October 16, 2014

Creating a stereoscopic viewer for Google Cardboard using the Autodesk 360 viewer – Part 3

After introducing the topic, showing a basic stereoscopic viewer using the Autodesk 360 viewer and then adding full-screen and device-tilt navigation, today we’re going to extend our UI to allow viewing of multiple models.

Firstly, it’s worth pointing out that for models to be accessible by the viewer – which makes use of my client credentials – the content also needs to have been uploaded using those same credentials. You can follow the procedure in this previous post to see how you do that, although I believe the ADN team has created some samples that help simplify the process, too.

Once you have the Base64 document IDs for your various models, it’s pretty simple to abstract the code to work on an arbitrary model. The main caveat is that there may be custom behaviours you want for particular models. For instance, there are models for which the up direction is the Z-axis rather than the Y-axis (mainly because the translation process isn’t perfect – or at least wasn’t when the model was processed) or for which you may want to save a custom view.

We take care of this in the code below by providing a couple of optional arguments to our launchViewer() function that can be used to specify an up direction and an initial zoom for particular models.

And that’s pretty much all this version of the code does beyond yesterday’s. Here’s the main modified section – you can, of course, just take a look at the complete file.

var viewerLeft, viewerRight;

var updatingLeft = false, updatingRight = false;

var leftLoaded = false, rightLoaded = false, cleanedModel = false;

var leftPos, baseDir, upVector;

var initZoom;

 

function Commands() { }

 

Commands.morgan = function () {

  launchViewer(

    'dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6c3RlYW1idWNrL1NwTTNXNy5mM2Q=',

    new THREE.Vector3(0, 0, 1),

    function () {

      zoom(

        viewerLeft,

        -48722.5, -54872, 44704.8,

        10467.3, 1751.8, 1462.8

      );

    }

  );

};

 

Commands.robot_arm = function () {

  launchViewer(

    'dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6c3RlYW1idWNrL1JvYm90QXJtLmR3Zng='   

  );

};

 

Commands.chassis = function () {

  launchViewer(

    'dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6c3RlYW1idWNrL0NoYXNzaXMuZjNk'

  );

};

 

Commands.front_loader = function () {

  launchViewer(

    'dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6c3RlYW1idWNrL0Zyb250JTIwTG9hZGVyLmR3Zng=',

    new THREE.Vector3(0, 0, 1)

  );

};

 

Commands.suspension = function () {

  launchViewer(

    'dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6c3RlYW1idWNrL1N1c3BlbnNpb24uaXB0'

  );

};

 

Commands.V8_engine = function () {

  launchViewer(

    'dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6c3RlYW1idWNrL1Y4RW5naW5lLnN0cA=='

  );

};

 

function initialize() {

 

  // Populate our initial UI with a set of buttons, one for each

  // function in the Commands object

 

  var panel = document.getElementById('control');

  for (var fn in Commands) {

    var button = document.createElement('div');

    button.classList.add('cmd-btn');

 

    // Replace any underscores with spaces before setting the

    // visible name

 

    button.innerHTML = fn.replace(/_/g, ' ');

    button.onclick = (function (fn) {

      return function () { fn(); };

    })(Commands[fn]);

 

    // Add the button with a space under it

 

    panel.appendChild(button);

    panel.appendChild(document.createTextNode('\u00a0'));

  }

}

 

function launchViewer(docId, upVec, zoomFunc) {

 

  // Assume the default "world up vector" of the Y-axis

  // (only atypical models such as Morgan and Front Loader require

  // the Z-axis to be set as up)

 

  upVec =

    typeof upVec !== 'undefined' ?

      upVec :

      new THREE.Vector3(0, 1, 0);

 

  // Ask for the page to be fullscreen

  // (can only happen in a function called from a

  // button-click handler or some other UI event)

 

  requestFullscreen();

 

  // Hide the controls that brought us here

 

  var controls = document.getElementById('control');

  controls.style.visibility = 'hidden';

 

  // Bring the layer with the viewers to the front

  // (important so they also receive any UI events)

 

  var layer1 = document.getElementById('layer1');

  var layer2 = document.getElementById('layer2');

  layer1.style.zIndex = 1;

  layer2.style.zIndex = 2;

 

  // Store the up vector in a global for later use

 

  upVector = new THREE.Vector3().copy(upVec);

 

  // The same for the optional Initial Zoom function

 

  if (zoomFunc)

    initZoom = zoomFunc;

 

  // Get our access token from the internal web-service API

 

  $.get('http://' + window.location.host + '/api/token',

    function (accessToken) {

 

      // Specify our options, including the provided document ID

 

      var options = {};

      options.env = 'AutodeskProduction';

      options.accessToken = accessToken;

      options.document = docId;

 

      // Create and initialize our two 3D viewers

 

      var elem = document.getElementById('viewLeft');

      viewerLeft = new Autodesk.Viewing.Viewer3D(elem, {});

 

      Autodesk.Viewing.Initializer(options, function () {

        viewerLeft.initialize();

        loadDocument(viewerLeft, options.document);

      });

 

      elem = document.getElementById('viewRight');

      viewerRight = new Autodesk.Viewing.Viewer3D(elem, {});

 

      Autodesk.Viewing.Initializer(options, function () {

        viewerRight.initialize();

        loadDocument(viewerRight, options.document);

      });

    }

  );

}

When you launch the HTML page it looks a bit different from last time, but only in that there’s now a choice of models to select from.

Here’s a slightly faked view of the UI on a mobile device (I’ve combined two screenshots to get the full UI on one screen):

The choice of models

We’ve seen plenty of the Morgan model, but here’s a quick taste of the others. There isn’t currently a back button in the UI, so you’ll have to reload the page to switch between models.

Robot Arm

Front Loader

Suspension

V8 Engine

I haven’t included the “Chassis” model, here: for some reason this looks great on my PC but is all black on my Android device. I’m not sure why, but I’ve nonetheless left it in the model list, for now.

I’ve now arrived in San Francisco and have finally been able to test with DODOcase’s Google Cardboard viewer. And it looks really good! I was expecting to have to tweak the camera offset, but that seems to be fine. I was also concerned I’d need to put a spherical warp on each viewer to compensate for lens distortion, but honestly that seems unnecessary, too – probably because we’re dealing with a central object view rather than walking through a scene.

I have to admit to finding the experience quite compelling. If you’re coming to AU or to the upcoming DevDays tour then you’ll be able to see for yourself there. Assuming you don’t want to buy or build your own and try it in the meantime, of course. :-)

October 15, 2014

Creating a stereoscopic viewer for Google Cardboard using the Autodesk 360 viewer – Part 2

I’m heading out the door in a few minutes to take the train to Zurich and a (thankfully direct) flight from there to San Francisco. I’ll have time on the flight to write up the next part in the series, so all will be in place for this weekend’s VR Hackathon.

In today’s post we’re going to extend the implementation we saw yesterday (and introduced on Monday) by adding full-screen viewing and device-tilt navigation.

Full-screen mode is easy: I borrowed some code from here that works well. The only thing to keep in mind is that the API can only be called from a UI event handler (such as when someone has pressed a button) – this is clearly intended to stop naughty pages from forcing you into full-screen mode on load. So we’re adding a single, huge “Start” button to launch the viewer. Nothing particularly interesting, although we do hide – and change the Z-order of – some divs to make an apparently multi-page UI happen via a single HTML file. We’ll extend this approach in tomorrow’s post to show more buttons, one for each hosted model.
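For reference, here’s a minimal sketch of the kind of cross-browser helper involved – this isn’t the exact borrowed code, and the function name and the set of vendor prefixes are my assumptions based on the browsers of the day:

```javascript
// Hypothetical helper: request full-screen mode on an element, trying
// the standard Fullscreen API first and then the vendor-prefixed
// variants used by browsers of this era
function launchFullscreen(element) {

  var request =
    element.requestFullscreen ||
    element.webkitRequestFullscreen ||
    element.mozRequestFullScreen ||
    element.msRequestFullscreen;

  if (request) {
    // Must be invoked from within a UI event handler to succeed
    request.call(element);
    return true;
  }
  return false; // Full-screen isn't supported by this browser
}
```

Remember that browsers will reject the request if this gets called outside a user-initiated event handler – hence the “Start” button.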

Device-tilt support is only a little more involved: the window has a ‘deviceorientation’ event we can listen to that gives us alpha/beta/gamma values representing data coming from the host device’s sensors (presumably the accelerometer and magnetometer). These appear to be given irrespective of the actual orientation (meaning whether it’s in portrait or landscape mode). We’re only interested in landscape mode, so we need to look at the alpha value for the horizontal (left-right) rotation and gamma for the vertical (front-back) rotation. The vertical rotation can be absolute, but we want to fix the left-right rotation based on an initial direction – horizontal rotations after that should be relative to that initial direction.

The HTML page hasn’t changed substantially – it has some additional styles, but that’s about it.

Here are the relevant additions to the referenced JavaScript file (I’ve omitted the UI changes and the event handler subscription – you can get the full source here).

function orb(e) {

 

  if (e.alpha && e.gamma) {

 

    // Remove our handlers watching for camera updates,

    // as we'll make any changes manually

    // (we won't actually bother adding them back, afterwards,

    // as this means we're in mobile mode and probably inside

    // a Google Cardboard holder)

 

    unwatchCameras();

 

    // Our base direction allows us to make relative horizontal

    // rotations when we rotate left & right

 

    if (!baseDir)

      baseDir = e.alpha;

 

    if (viewerLeft.running && viewerRight.running) {

 

      var deg2rad = Math.PI / 180;

 

      // gamma is the front-to-back in degrees (with

      // this screen orientation) with +90/-90 being

      // vertical and negative numbers being 'downwards'

      // with positive being 'upwards'

 

      var vert = (e.gamma + (e.gamma <= 0 ? 90 : -90)) * deg2rad;

 

      // alpha is the compass direction the device is

      // facing in degrees. This equates to the

      // left - right rotation in landscape

      // orientation (with 0-360 degrees)

 

      var horiz = (e.alpha - baseDir) * deg2rad;

 

      orbitViews(vert, horiz);

    }

  }

}

 

function orbitViews(vert, horiz) {

 

  // We'll rotate our position based on the initial position

  // and the target will stay the same

 

  var pos = new THREE.Vector3().copy(leftPos);

  var trg = viewerLeft.navigation.getTarget();

 

  // Start by applying the left/right orbit

  // (we need to check the up/down value, though)

 

  var zAxis = new THREE.Vector3(0, 0, 1);

  pos.applyAxisAngle(zAxis, (vert < 0 ? horiz + Math.PI : horiz));

 

  // Now add the up/down rotation

 

  var axis = new THREE.Vector3().subVectors(pos, trg).normalize();

  axis.cross(zAxis);

  pos.applyAxisAngle(axis, vert);

 

  // Zoom in with the lefthand view

 

  zoom(viewerLeft, pos.x, pos.y, pos.z, trg.x, trg.y, trg.z);

 

  // Get a camera slightly to the right

 

  var pos2 = offsetCameraPos(viewerLeft, pos, trg, true);

 

  // And zoom in with that on the righthand view, too

 

  var up = viewerLeft.navigation.getCameraUpVector();

 

  zoom(

    viewerRight,

    pos2.x, pos2.y, pos2.z,

    trg.x, trg.y, trg.z,

    up.x, up.y, up.z

  );

}

So how can we test this? Obviously with a physical device – and I recommend using Chrome on an Android device for best results – or you can use Google Chrome Canary on your PC (whether Mac or Windows). Canary is Chrome’s experimental early-release channel, which sits ahead of the Beta and stable channels and always carries that name. As you can probably tell, this is the first time I’ve installed it. :-)

Canary currently includes some very helpful developer tools that go beyond what’s in the current stable release of Chrome (which at the time of writing is version 38.0.2125.101 for me, at least). The version of Chrome Canary I have installed is version 40.0.2185.0.

Here’s the main page loaded in Chrome Canary with the enhanced developer tools showing:

Our page in Chrome Canary

The important part is the bottom-right pane which includes sensor emulation information. For more information on enabling this (which you do via the blue “mobile device” icon at the top, next to the search icon) check the online Chrome developer docs.

You can either enter absolute values – which is in itself very handy – or grab onto the device and wiggle it around (which helps emulate more realistic device usage, I expect).

Canary device-tilt

Again, here’s the page for you to try yourself.

In tomorrow’s post we’ll extend this implementation to look at other models, refactoring some of the UI and viewer control code in the process.

October 14, 2014

Creating a stereoscopic viewer for Google Cardboard using the Autodesk 360 viewer – Part 1

After yesterday’s introduction to this series of posts, today we’re going to dive into some specifics, implementing a basic, web-based, stereoscopic viewer.

While this series of posts is really about using Google Cardboard to view Autodesk 360 models in 3D (an interesting topic, I hope you’ll agree ;-), it’s also about how easily you can use the Autodesk 360 viewer to power Google Cardboard: we’ll see it’s a straightforward way to get 3D content into a visualization system that’s really all about 3D.

Let’s start with some basics. We clearly need two views in our web-page, one for each eye. For now we’re not going to worry about making the page full-screen – which basically means hiding the address bar – as we’ll address that when we integrate device-tilt navigation tomorrow. But the web-page will fill the screen estate that we have, of course.

Our basic stereoscopic 3D viewer

The Autodesk 360 viewer doesn’t currently support multiple viewports on a single scene – even if this is a capability that Three.js provides – so for now we’re going to embed two separate instances of the Autodesk 360 viewer. At some point the viewer will hopefully provide viewporting capability – and allow us to reduce the app’s network usage and memory footprint – but we’ll see over the coming posts that even with two separate viewer instances the app performs well.

In this post and the next we’re going to make use of the Morgan model that we saw “steampunked” using Fusion 360 and then integrated into my first Autodesk 360 application – basically because it’s content that can already be accessed by this particular site. On Thursday we’ll extend that to be able to choose from a selection of models.

The lighting used for this model is different from the previous sample’s: “simple grey” works better on mobile devices than “riverbank”, it seems (the latter has much more going on in terms of lights, environment backgrounds, etc.).

I’m looking at this viewer as an “object viewer”, which allows us to spin the camera around a fixed point of interest and view it from different angles, rather than a “walk-/fly-through viewer”. This is a choice, of course: you could easily take the foundation shown in this series and make a viewer that’s better-suited for viewing an architectural model from the inside, for instance.

OK, before we go much further, I should probably add this caveat: I don’t actually yet have a Google Cardboard device in my possession. I have a Nexus 4 phone – which has Android 4.4.4 and can run the native Google Cardboard app as well as host WebGL for a web-based viewer implementation – but I don’t have the lenses, etc. I have a DODOcase VR Cardboard Toolkit waiting for me in San Francisco, but so far I haven’t been able to test whether the stereoscopic effect works or not. I’ve squinted at the screen from close up, of course, but haven’t yet seen anything jump out in 3D. That said, Jim Quanci assures me it looks great with the proper case, so I’m fairly sure I’m not wasting everyone’s time with these posts.

The main “known unknown” until I test firsthand has been the distance to be used between the two camera positions. Three.js allows us to translate a camera in the X direction (relative to its viewing direction along Z, which basically means pan left or right) very easily, but I’ve had to guess a little with the distance. For now I’ve taken 4% of the distance between the camera and the target – as this gives a very slight difference between the views for various models I tried – but this value may need some tweaking.
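To make that 4% offset concrete, here’s a rough sketch of the vector maths involved. It uses plain objects rather than THREE.Vector3 so the arithmetic is visible, and a hypothetical offsetEye() name – the post’s actual helper is offsetCameraPos(), whose internals aren’t shown here:

```javascript
// Sketch: derive the righthand eye position from the lefthand one by
// panning the camera along its local X-axis by 4% of the view distance
function offsetEye(pos, trg, up, factor) {

  factor = factor || 0.04;

  // View direction from eye to target, and the eye-target distance
  var view = { x: trg.x - pos.x, y: trg.y - pos.y, z: trg.z - pos.z };
  var dist =
    Math.sqrt(view.x * view.x + view.y * view.y + view.z * view.z);

  // The camera's local X-axis is the cross product of the view
  // direction and the up vector
  var right = {
    x: view.y * up.z - view.z * up.y,
    y: view.z * up.x - view.x * up.z,
    z: view.x * up.y - view.y * up.x
  };
  var len =
    Math.sqrt(right.x * right.x + right.y * right.y + right.z * right.z);

  // Pan the eye sideways by 4% of the eye-target distance
  var d = (dist * factor) / len;
  return {
    x: pos.x + right.x * d,
    y: pos.y + right.y * d,
    z: pos.z + right.z * d
  };
}
```

With Three.js the same thing falls out of a subVectors/cross/normalize/multiplyScalar chain, of course – the point is just that the separation scales with the distance to the target.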

Beyond working out the camera positions of the two views, the main work is about keeping them in sync: if the lefthand view changes then the righthand view should adjust to keep the stereo effect, and vice-versa. In my first implementation I used a number of HTML5 events to do this: click, mouseup, mousemove, touchstart, touchend, touchcancel, touchleave & touchmove. And then I realised that there was no simple way to hook into zoom, which drove me crazy for a while. Argh. But then I realised I could hook into the viewer’s cameraChanged event, instead, which was much better (although this gets called for any change in the viewer, and you also need to make sure you don’t get into circular modifications, leading to your model disappearing into the distance… :-).
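Here’s a sketch of that flag-based guard, using the updatingLeft/updatingRight flags declared at the top of the source file – the handler signatures are simplified stand-ins for illustration, not the viewer’s actual event API:

```javascript
// Sketch: stop cameraChanged events bouncing between the two viewers.
// When one viewer's handler drives the other, a flag suppresses the
// cameraChanged event that the driven viewer raises in response.
var updatingLeft = false, updatingRight = false;

function onLeftCameraChange(syncRightView) {
  if (updatingLeft) return;  // this change was caused by the right viewer
  updatingRight = true;      // suppress the echo from the right viewer
  syncRightView();           // e.g. zoom() the right view to match
  updatingRight = false;
}

function onRightCameraChange(syncLeftView) {
  if (updatingRight) return; // this change was caused by the left viewer
  updatingLeft = true;       // suppress the echo from the left viewer
  syncLeftView();
  updatingLeft = false;
}
```

Without the flags, each sync would trigger the other viewer’s handler, which would sync back – and the compounding offsets send the model off into the distance.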

Here’s an animated GIF of the views being synchronised successfully between the two embedded viewers inside a desktop browser:

Stereo Morgan

Now for some code… here’s the HTML page (which I’ve named stereo-basic.html) for the simple, stereoscopic viewer. I’ve embedded the styles but have kept the JavaScript in a separate file for easier debugging.

<!DOCTYPE html>

<html>

  <head>

    <meta charset="utf-8">

    <title>Basic Stereoscopic Viewer</title>

    <link rel="shortcut icon" type="image/x-icon" href="/favicon.ico?v=2">

    <meta

      name="viewport"

      content=

        "width=device-width, minimum-scale=1.0, maximum-scale=1.0" />

    <link

      rel="stylesheet"

      href="https://developer.api.autodesk.com/viewingservice/v1/viewers/style.css"

      type="text/css">

    <script

      src=

        "https://developer.api.autodesk.com/viewingservice/v1/viewers/viewer3D.min.js">

    </script>

    <script src="js/jquery.js"></script>

    <script src="js/stereo-basic.js"></script>

    <style>

      body {

        margin: 0px;

        overflow: hidden;

      }

    </style>

  </head>

  <body onload="initialize();" oncontextmenu="return false;">

    <table width="100%" height="100%">

      <tr>

        <td width="50%">

          <div id="viewLeft" style="width:50%; height:100%;"></div>

        </td>

        <td width="50%">

          <div id="viewRight" style="width:50%; height:100%;"></div>

        </td>

      </tr>

    </table>

  </body>

</html>

And here’s the referenced JavaScript file:

var viewerLeft, viewerRight;

var updatingLeft = false, updatingRight = false;

var leftLoaded = false, rightLoaded = false, cleanedModel = false;

var leftPos; // saved left-eye camera position (tilt orbits are relative to this)

 

function initialize() {

 

  // Get our access token from the internal web-service API

 

  $.get('http://' + window.location.host + '/api/token',

    function (accessToken) {

 

      // Specify our options, including the document ID

 

      var options = {};

      options.env = 'AutodeskProduction';

      options.accessToken = accessToken;

      options.document =

       'dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6c3RlYW1idWNrL1NwTTNXNy5mM2Q=';

 

      // Create and initialize our two 3D viewers

 

      var elem = document.getElementById('viewLeft');

      viewerLeft = new Autodesk.Viewing.Viewer3D(elem, {});

 

      Autodesk.Viewing.Initializer(options, function () {

        viewerLeft.initialize();

        loadDocument(viewerLeft, options.document);

      });

 

      elem = document.getElementById('viewRight');

      viewerRight = new Autodesk.Viewing.Viewer3D(elem, {});

 

      Autodesk.Viewing.Initializer(options, function () {

        viewerRight.initialize();

        loadDocument(viewerRight, options.document);

      });

    }

  );

}

 

function loadDocument(viewer, docId) {

 

  // The viewer defaults to the full width of the container,

  // so we need to set that to 50% to get side-by-side

 

  viewer.container.style.width = '50%';

  viewer.resize();

 

  // Let's zoom in and out of the pivot - the screen

  // real estate is fairly limited - and reverse the

  // zoom direction

 

  viewer.navigation.setZoomTowardsPivot(true);

  viewer.navigation.setReverseZoomDirection(true);

 

  if (docId.substring(0, 4) !== 'urn:')

    docId = 'urn:' + docId;

 

  Autodesk.Viewing.Document.load(docId,

    function (document) {

 

      // Boilerplate code to load the contents

 

      var geometryItems =

        Autodesk.Viewing.Document.getSubItemsWithProperties(

          document.getRootItem(),

          { 'type': 'geometry', 'role': '3d' },

          true

        );

      if (geometryItems.length > 0) {

        viewer.load(document.getViewablePath(geometryItems[0]));

      }

 

      // Add our custom progress listener and set the loaded

      // flags to false

 

      viewer.addEventListener('progress', progressListener);

      leftLoaded = rightLoaded = false;

    },

    function (errorMsg, httpErrorCode) {

      var container = document.getElementById('viewLeft');

      if (container) {

        alert('Load error ' + errorMsg);

      }

    }

  );

}

 

// Progress listener to set the view once the data has started

// loading properly (we get a 5% notification early on that we

// need to ignore - it comes too soon)

 

function progressListener(e) {

 

  // If we haven't cleaned this model's materials and set the view

  // and both viewers are sufficiently ready, then go ahead

 

  if (!cleanedModel &&

    ((e.percent > 0.1 && e.percent < 5) || e.percent > 5)) {

 

    if (e.target.clientContainer.id === 'viewLeft')

      leftLoaded = true;

    else if (e.target.clientContainer.id === 'viewRight')

      rightLoaded = true;

 

    if (leftLoaded && rightLoaded && !cleanedModel) {

 

      // Iterate the materials to change any red ones to grey

 

      var viewers = [viewerLeft, viewerRight];

      for (var i = 0; i < viewers.length; i++) {

        var materials = viewers[i].impl.matman().materials;

        for (var p in materials) {

          var m = materials[p];

          if (m.color.r >= 0.5 && m.color.g === 0 && m.color.b === 0) {

            m.color.r = m.color.g = m.color.b = 0.5;

            m.needsUpdate = true;

          }

        }

      }

 

      // Zoom to the overall view initially

 

      zoomEntirety(viewerLeft);

      setTimeout(function () { transferCameras(true); }, 0);

 

      cleanedModel = true;

    }

  }

  else if (cleanedModel && e.percent > 10) {

 

    // If we have already cleaned and are even further loaded,

    // remove the progress listeners from the two viewers and

    // watch the cameras for updates

 

    unwatchProgress();

 

    watchCameras();

  }

}

 

// Add and remove the pre-viewer event handlers

 

function watchCameras() {

  viewerLeft.addEventListener('cameraChanged', left2right);

  viewerRight.addEventListener('cameraChanged', right2left);

}

 

function unwatchCameras() {

  viewerLeft.removeEventListener('cameraChanged', left2right);

  viewerRight.removeEventListener('cameraChanged', right2left);

}

 

function unwatchProgress() {

  viewerLeft.removeEventListener('progress', progressListener);

  viewerRight.removeEventListener('progress', progressListener);

}

 

// Event handlers for the cameraChanged events

 

function left2right() {

  if (!updatingRight) {

    updatingLeft = true;

    transferCameras(true);

    setTimeout(function () { updatingLeft = false; }, 500);

  }

}

 

function right2left() {

  if (!updatingLeft) {

    updatingRight = true;

    transferCameras(false);

    setTimeout(function () { updatingRight = false; }, 500);

  }

}

 

function transferCameras(leftToRight) {

 

  // The direction argument dictates the source and target

 

  var source = leftToRight ? viewerLeft : viewerRight;

  var target = leftToRight ? viewerRight : viewerLeft;

 

  var pos = source.navigation.getPosition();

  var trg = source.navigation.getTarget();

 

  // Set the up vector manually for both cameras

 

  var upVector = new THREE.Vector3(0, 0, 1);

  source.navigation.setWorldUpVector(upVector);

  target.navigation.setWorldUpVector(upVector);

 

  // Get the up direction of the source camera

 

  var up = source.navigation.getCameraUpVector();

 

  // Get the new position for the target camera

 

  var newPos = offsetCameraPos(source, pos, trg, leftToRight);

 

  // Save the left-hand camera position: device tilt orbits

  // will be relative to this point

 

  leftPos = leftToRight ? pos : newPos;

 

  // Zoom to the new camera position in the target

 

  zoom(

    target, newPos.x, newPos.y, newPos.z, trg.x, trg.y, trg.z,

    up.x, up.y, up.z

  );

}

 

function offsetCameraPos(source, pos, trg, leftToRight) {

 

  // Get the distance from the camera to the target

 

  var xd = pos.x - trg.x;

  var yd = pos.y - trg.y;

  var zd = pos.z - trg.z;

  var dist = Math.sqrt(xd * xd + yd * yd + zd * zd);

 

  // Use a small fraction of this distance for the camera offset

 

  var disp = dist * 0.04;

 

  // Clone the camera and return its X translated position

 

  var clone = source.autocamCamera.clone();

  clone.translateX(leftToRight ? disp : -disp);

  return clone.position;

}

 

// Model-specific helper to zoom into a specific part of the model

 

function zoomEntirety(viewer) {

  zoom(viewer, -48722.5, -54872, 44704.8, 10467.3, 1751.8, 1462.8);

}

 

// Set the camera based on a position, target and optional up vector

 

function zoom(viewer, px, py, pz, tx, ty, tz, ux, uy, uz) {

 

  // Make sure our up vector is correct for this model

 

  var upVector = new THREE.Vector3(0, 0, 1);

  viewer.navigation.setWorldUpVector(upVector, true);

 

  // Check against undefined rather than truthiness, as a legitimate
  // up vector component can be zero

  var up =

    (ux !== undefined) ? new THREE.Vector3(ux, uy, uz) : upVector;

 

  viewer.navigation.setView(

    new THREE.Vector3(px, py, pz),

    new THREE.Vector3(tx, ty, tz)

  );

  viewer.navigation.setCameraUpVector(up);

}

To host something similar yourself, I recommend starting with the post I linked to earlier and building it up from there (you basically need to provide the ‘/api/token’ server API – using your own client credentials – for this to work).

But you don’t need to build it yourself – or even have an Android device – to give this a try. Simply load the HTML page in your preferred WebGL-capable browser (Chrome is probably safest, considering that’s what I’ve been using when developing this) and have a play.

On a PC it will respond to mouse or touch navigation, of course, but in tomorrow’s post we’ll implement a much more interesting – at least with respect to Google Cardboard, where you can’t get your fingers near the screen to navigate – tilt-based navigation mechanism. We’ll also take a look at how we can use Google Chrome Canary to emulate device-tilt on a PC, reducing the need to jump through the various hoops needed to debug remotely. Interesting stuff. :-)

October 13, 2014

Gearing up for the VR Hackathon

I’m heading back across to the Bay Area on Wednesday for 10 days. There seems to be a pattern forming to my trips across: I’ll spend the first few days in San Francisco – in this case attending internal strategy meetings in our 1 Market office – and then head up after the weekend to San Rafael to work with the members of the AutoCAD engineering team based up there. I’ll still probably head back into SF for the odd day, the following week, but that’s fine: I really like commuting by ferry from Larkspur to the Embarcadero.

The weekend I’m spending in the Bay Area is looking to have a slightly different shape this time, though. Rather than just catching up with old friends (which I still hope to do), I’ve signed up for the VR Hackathon, an event that looks really interesting. I was happy to find out about this one and that it fell exactly during my stay. I’ve even roped a few colleagues into coming along, too.

VR Hackathon

Looking at the “challenges” posted for the hackathon, it seemed worth taking a look at web and mobile VR, as these look like the two that I’m most likely to be able to contribute towards. Which led me to reach out to Jim Quanci and Cyrille Fauvel, over in the ADN team, to see what’s been happening with respect to VR platforms such as Oculus Rift and Google Cardboard.

It turns out the ADN team has invested in a few Oculus Rift Developer Kits, but was looking for someone to spend some time fooling around with integrating the new WebGL-based Autodesk 360 viewer with Google Cardboard. And as “fooling around” is my middle name, I signed up enthusiastically. :-)

For those of you who haven’t been following the VR space lately, I think it’s fair to say that Facebook put the cat amongst the pigeons when they acquired Oculus. Google’s competitive response was very interesting: at this year’s Google I/O they announced Google Cardboard, a simple View-Master-like mount for a smartphone that can be used for AR or VR.

Cardboard

A few notes about the design: there are two lenses that focus the smartphone’s display – which is split in half in landscape mode, with one half for each eye – and there’s a simple magnet-based button on the left as well as an embedded NFC tag to tell the phone when to launch the Cardboard software. The rear camera has also been left clear in case you need its input for a “reality feed” in the case of AR or perhaps some additional information to help with VR.

Aside from the smartphone, the whole package can be made for a few dollars (assuming a certain economy of scale, of course) with the provided instructions. Right now you can pick them up pre-assembled for anywhere between $15 and $30 – still cheap for the capabilities provided. Which has led to the somewhat inevitable nickname of “Oculus Thrift”. :-)

The point Google is making, of course, is that you don’t need expensive, complex kit to do VR: today’s smartphones have a lot of the capabilities needed, in terms of processing power, sensors and responsive, high-resolution displays.

When looking into the possibilities for supporting Cardboard from a software perspective, there seem to be two main options: the first is to create a native Android app using Google’s SDK; the second is to create a web-app such as those available on the Chrome Experiments site.

Given the web-based nature of the Autodesk 360 viewer, it seemed to make sense to follow the latter path. Jim and Cyrille kindly pointed me at an existing integration of Cardboard with Three.js/WebGL, which turned out to be really useful. But we’ll look at some specifics more closely in the next post.

During the rest of the week – and I expect to post each day until Thursday, at least, so check back often – I’ll cover the following topics:

  1. Creating a stereoscopic viewer for Google Cardboard using the Autodesk 360 viewer
  2. Adding tilt support for model navigation and enabling fullscreen mode
  3. Supporting multiple models

If I manage to get my hands on the pre-release Leap Motion SDK for Android then I’ll try to integrate that, too, at some point. Mounting a Leap Motion controller to the back of the goggles allows you to use hand gestures for additional (valuable) input in a VR environment… I’m thinking this may end up being the “killer app” for Leap Motion (not mine specifically, but VR in general).

Until tomorrow!
