October 20, 2014

VR Hackathon 2014 in SF

It’s been a busy few days. After being in full-day meetings on Thursday and Friday, I headed down with Jim Quanci to the VR Hackathon’s kick-off event on Friday night. It was held at the newly refurbished Gray Area Theater in San Francisco’s Mission district.

The Friday night “mega meetup” was a great way to kick the event off, with presentations from NASA’s JPL on how they teamed up with Sony to develop a prototype VR system to control robots for asteroid mining.

Asteroid mining

There was also an interesting presentation on the evolution of VR tech from Leap Motion’s founder and CTO, David Holz.

David Holz from Leap Motion

Jim and I set up a table – as Autodesk sponsored the event – and over the course of the weekend talked to various people about the Autodesk Viewing & Data API (and about our products and APIs, in general).

Jim talking 3D

To help demo the stereoscopic viewer – and to entice people to come and check it out – I had some fun putting together a version that auto-orbits and explodes the model (once fully loaded). This version is best viewed in a browser, of course, as it doesn’t respond to device tilt.

Our DODOcase and PCs on the last day

I wasn’t sure whether I’d end up participating in a team, but I had so much fun hanging out with Jim and chatting to people that I stuck with doing just that.

There were lots of fun things going on with the various teams…

Lots of medically oriented devices

A team working with Google Cardboard

There were even a few Autodeskers present. Lars Schneider – a member of the InfraWorks team in Potsdam, Germany – formed a team with Torsten Becker, a friend of his who was also visiting SF. They’re pictured here with Michael Beale, who has worked on our web-based viewing technology and is currently on the rendering-as-a-service team.

Michael, Torsten and Lars

Aside from answering questions and giving demos, I also spent some time checking out the other sponsors’ technology. Sony’s asteroid mining tech was neat:

Kean in Morpheus

As was the combination of Leap Motion with Oculus Rift:

Leap and Oculus

Leap Motion's Oculus Rift demo - keeping balls in the air

Beyond the Oculus Rift integration, at least one team was using an alpha version of Leap Motion’s Android SDK to provide input into a Google Cardboard-based game. Something I intend to try myself, once I manage to get a phone that supports the SDK.

Someone using Google Cardboard with Leap Motion via the Android SDK

Sunday afternoon was all about judging. A number of the more tethered solutions were judged by a roving panel of expert judges…

Judges judging

… but several others ended up being presented on the main stage. Here’s a section of a video of Lars & Torsten’s Oculus Rift + Leap Motion app that I’m very happy to say ended up winning the WebVR category.

Lars and Torsten's Hackathon demo

(Lars tells me they’ll be posting the code soon – I’ll be sure to link to it here.)

Way to go, guys – makes me feel good to see a fellow Autodesker doing so well at this event. :-)

Overall it was a great weekend. There were some really cool projects – such as a Leap Motion-based hand tremor detector, a procedurally-generated game world (which reminded me of Elite) and a CAD-like tool that allows you to tweak the design of a lamp shade by tweaking the position of shadows on the wall. Awesome stuff.

Many thanks to Damon Hernandez and the members of the Web3D Consortium for all the hard work. I hope I’ll be able to make it across to the next event!

October 16, 2014

Creating a stereoscopic viewer for Google Cardboard using the Autodesk 360 viewer – Part 3

After introducing the topic, showing a basic stereoscopic viewer using the Autodesk 360 viewer and then adding full-screen and device-tilt navigation, today we’re going to extend our UI to allow viewing of multiple models.

Firstly, it’s worth pointing out that for models to be accessible by a viewer using my client credentials, the content also needs to have been uploaded with those same credentials. You can follow the procedure in this previous post to see how to do that, although I believe the ADN team has created some samples that help simplify the process, too.

Once you have the Base64 document IDs for your various models, it’s pretty simple to abstract the code to work on an arbitrary model. The main caveat is that there may be custom behaviours you want for particular models. For instance, there are models for which the up direction is the Z-axis rather than the Y-axis (mainly because the translation process isn’t perfect, or at least wasn’t when the model was processed), or for which you may want to save a custom view.
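As an aside, the long document IDs used below are simply Base64 encodings of each object’s URN. A quick sketch of the encoding step (Node shown here; in a browser you could use btoa() instead):

```javascript
// The viewer's document ID is just the Base64 encoding of the object's URN.
var urn = 'urn:adsk.objects:os.object:steambuck/SpM3W7.f3d';
var docId = Buffer.from(urn).toString('base64');
// docId === 'dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6c3RlYW1idWNrL1NwTTNXNy5mM2Q='
```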

We take care of this in the code below by providing a couple of optional arguments to our launchViewer() function, which can be used to specify an up direction and an initial zoom for particular models.

And that’s pretty much all this version of the code does beyond yesterday’s. Here’s the main modified section – you can, of course, just take a look at the complete file.

var viewerLeft, viewerRight;
var updatingLeft = false, updatingRight = false;
var leftLoaded = false, rightLoaded = false, cleanedModel = false;
var leftPos, baseDir, upVector;
var initZoom;

function Commands() { }

Commands.morgan = function () {
  launchViewer(
    'dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6c3RlYW1idWNrL1NwTTNXNy5mM2Q=',
    new THREE.Vector3(0, 0, 1),
    function () {
      zoom(
        viewerLeft,
        -48722.5, -54872, 44704.8,
        10467.3, 1751.8, 1462.8
      );
    }
  );
};

Commands.robot_arm = function () {
  launchViewer(
    'dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6c3RlYW1idWNrL1JvYm90QXJtLmR3Zng='
  );
};

Commands.chassis = function () {
  launchViewer(
    'dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6c3RlYW1idWNrL0NoYXNzaXMuZjNk'
  );
};

Commands.front_loader = function () {
  launchViewer(
    'dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6c3RlYW1idWNrL0Zyb250JTIwTG9hZGVyLmR3Zng=',
    new THREE.Vector3(0, 0, 1)
  );
};

Commands.suspension = function () {
  launchViewer(
    'dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6c3RlYW1idWNrL1N1c3BlbnNpb24uaXB0'
  );
};

Commands.V8_engine = function () {
  launchViewer(
    'dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6c3RlYW1idWNrL1Y4RW5naW5lLnN0cA=='
  );
};

function initialize() {

  // Populate our initial UI with a set of buttons, one for each
  // function in the Commands object

  var panel = document.getElementById('control');
  for (var fn in Commands) {
    var button = document.createElement('div');
    button.classList.add('cmd-btn');

    // Replace any underscores with spaces before setting the
    // visible name

    button.innerHTML = fn.replace('_', ' ');
    button.onclick = (function (fn) {
      return function () { fn(); };
    })(Commands[fn]);

    // Add the button with a space under it

    panel.appendChild(button);
    panel.appendChild(document.createTextNode('\u00a0'));
  }
}

function launchViewer(docId, upVec, zoomFunc) {

  // Assume the default "world up vector" of the Y-axis
  // (only atypical models such as Morgan and Front Loader require
  // the Z-axis to be set as up)

  upVec =
    typeof upVec !== 'undefined' ?
      upVec :
      new THREE.Vector3(0, 1, 0);

  // Ask for the page to be fullscreen
  // (can only happen in a function called from a
  // button-click handler or some other UI event)

  requestFullscreen();

  // Hide the controls that brought us here

  var controls = document.getElementById('control');
  controls.style.visibility = 'hidden';

  // Bring the layer with the viewers to the front
  // (important so they also receive any UI events)

  var layer1 = document.getElementById('layer1');
  var layer2 = document.getElementById('layer2');
  layer1.style.zIndex = 1;
  layer2.style.zIndex = 2;

  // Store the up vector in a global for later use

  upVector = new THREE.Vector3().copy(upVec);

  // The same for the optional initial zoom function

  if (zoomFunc)
    initZoom = zoomFunc;

  // Get our access token from the internal web-service API

  $.get('http://' + window.location.host + '/api/token',
    function (accessToken) {

      // Specify our options, including the provided document ID

      var options = {};
      options.env = 'AutodeskProduction';
      options.accessToken = accessToken;
      options.document = docId;

      // Create and initialize our two 3D viewers

      var elem = document.getElementById('viewLeft');
      viewerLeft = new Autodesk.Viewing.Viewer3D(elem, {});

      Autodesk.Viewing.Initializer(options, function () {
        viewerLeft.initialize();
        loadDocument(viewerLeft, options.document);
      });

      elem = document.getElementById('viewRight');
      viewerRight = new Autodesk.Viewing.Viewer3D(elem, {});

      Autodesk.Viewing.Initializer(options, function () {
        viewerRight.initialize();
        loadDocument(viewerRight, options.document);
      });
    }
  );
}

When you launch the HTML page it looks a bit different from last time, but only in that there’s now a choice of models to select from.

Here’s a slightly faked view of the UI on a mobile device (I’ve combined two screenshots to get the full UI on one screen):

The choice of models

We’ve seen plenty of the Morgan model, but here’s a quick taste of the others. There isn’t currently a back button in the UI, so you’ll have to reload the page to switch between models.

Robot Arm

Front Loader

Suspension

V8 Engine

I haven’t included the “Chassis” model here: for some reason it looks great on my PC but renders all black on my Android device. I’m not sure why, but I’ve nonetheless left it in the model list for now.

I’ve now arrived in San Francisco and have been finally able to test with DODOcase’s Google Cardboard viewer. And it looks really good! I was expecting to have to tweak the camera offset, but that seems to be fine. I was also concerned I’d need to put a spherical warp on each viewer to compensate for lens distortion, but honestly that seems unnecessary, too. Probably because we’re dealing with a central object view rather than walking through a scene.

I have to admit to finding the experience quite compelling. If you’re coming to AU or to the upcoming DevDays tour then you’ll be able to see for yourself there. Assuming you don’t want to buy or build your own and try it in the meantime, of course. :-)

October 15, 2014

Creating a stereoscopic viewer for Google Cardboard using the Autodesk 360 viewer – Part 2

I’m heading out the door in a few minutes to take the train to Zurich and a (thankfully direct) flight from there to San Francisco. I’ll have time on the flight to write up the next part in the series, so all will be in place for this weekend’s VR Hackathon.

In today’s post we’re going to extend the implementation we saw yesterday (and introduced on Monday) by adding full-screen viewing and device-tilt navigation.

Full-screen mode is easy: I borrowed some code from here that works well. The only thing to keep in mind is that the API can only be called from a UI event handler (such as when someone has pressed a button) – this is clearly intended to stop naughty pages from forcing you into full-screen mode on load. So we’re adding a single, huge “Start” button to launch the viewer. Nothing particularly interesting, although we do hide – and change the Z-order of – some divs to make an apparently multi-page UI happen via a single HTML file. We’ll extend this approach in tomorrow’s post to show more buttons, one for each hosted model.
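A minimal helper along those lines might look like this – a sketch rather than the exact borrowed code, with the vendor-prefixed method names reflecting browsers of the time (the optional elem parameter is my own addition, just to make the function easy to exercise):

```javascript
// Sketch of a cross-browser requestFullscreen() helper (assumed, not the
// post's exact code). Browsers only honour this when it's triggered from a
// user-initiated event, such as our big "Start" button's click handler.
function requestFullscreen(elem) {
  var el = elem || document.documentElement;
  var fn =
    el.requestFullscreen ||       // standard
    el.webkitRequestFullscreen || // Chrome/Safari of the era
    el.mozRequestFullScreen ||    // Firefox (note the capital S)
    el.msRequestFullscreen;       // IE11
  if (fn) fn.call(el);
}
```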

Device-tilt support is only a little more involved: the window has a ‘deviceorientation’ event we can listen to that gives us alpha/beta/gamma values representing data coming from the host device’s sensors (presumably the accelerometer and magnetometer). These values appear to be reported irrespective of the device’s actual orientation (i.e. whether it’s in portrait or landscape mode). We’re only interested in landscape mode, so we need to look at the alpha value for the horizontal (left-right) rotation and gamma for the vertical (front-back) rotation. The vertical rotation can be absolute, but we want to fix the left-right rotation based on an initial direction – horizontal rotations after that should be relative to that initial direction.
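One subtlety with making the horizontal rotation relative is compass wrap-around at the 0/360 degree boundary. This helper – my own sketch, not code from the post’s source – shows one way to normalize a reading against the stored base direction:

```javascript
// Sketch (an assumption, not the post's code): express a compass alpha
// reading relative to the initial base direction, wrapped into (-180, 180]
// so crossing the 0/360 boundary doesn't register as a near-full rotation.
function relativeHeading(alpha, baseDir) {
  var delta = alpha - baseDir;
  while (delta > 180) delta -= 360;
  while (delta <= -180) delta += 360;
  return delta;
}
```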

The HTML page hasn’t changed substantially – it has some additional styles, but that’s about it.

Here are the relevant additions to the referenced JavaScript file (I’ve omitted the UI changes and the event handler subscription – you can get the full source here).

function orb(e) {

  if (e.alpha && e.gamma) {

    // Remove our handlers watching for camera updates,
    // as we'll make any changes manually
    // (we won't actually bother adding them back, afterwards,
    // as this means we're in mobile mode and probably inside
    // a Google Cardboard holder)

    unwatchCameras();

    // Our base direction allows us to make relative horizontal
    // rotations when we rotate left & right

    if (!baseDir)
      baseDir = e.alpha;

    if (viewerLeft.running && viewerRight.running) {

      var deg2rad = Math.PI / 180;

      // gamma is the front-to-back tilt in degrees (with
      // this screen orientation), with +90/-90 being
      // vertical, negative numbers being 'downwards'
      // and positive being 'upwards'

      var vert = (e.gamma + (e.gamma <= 0 ? 90 : -90)) * deg2rad;

      // alpha is the compass direction the device is
      // facing in degrees. This equates to the
      // left-right rotation in landscape
      // orientation (with 0-360 degrees)

      var horiz = (e.alpha - baseDir) * deg2rad;

      orbitViews(vert, horiz);
    }
  }
}

function orbitViews(vert, horiz) {

  // We'll rotate our position based on the initial position
  // and the target will stay the same

  var pos = new THREE.Vector3().copy(leftPos);
  var trg = viewerLeft.navigation.getTarget();

  // Start by applying the left/right orbit
  // (we need to check the up/down value, though)

  var zAxis = new THREE.Vector3(0, 0, 1);
  pos.applyAxisAngle(zAxis, (vert < 0 ? horiz + Math.PI : horiz));

  // Now add the up/down rotation

  var axis = new THREE.Vector3().subVectors(pos, trg).normalize();
  axis.cross(zAxis);
  pos.applyAxisAngle(axis, vert);

  // Zoom in with the lefthand view

  zoom(viewerLeft, pos.x, pos.y, pos.z, trg.x, trg.y, trg.z);

  // Get a camera slightly to the right

  var pos2 = offsetCameraPos(viewerLeft, pos, trg, true);

  // And zoom in with that on the righthand view, too

  var up = viewerLeft.navigation.getCameraUpVector();

  zoom(
    viewerRight,
    pos2.x, pos2.y, pos2.z,
    trg.x, trg.y, trg.z,
    up.x, up.y, up.z
  );
}

So how can we test this? Obviously with a physical device – I recommend using Chrome on an Android device for best results – or you can use Google Chrome Canary on your PC (whether Mac or Windows). Canary is the codename for Chrome’s bleeding-edge release channel, which ships builds ahead of the Dev and Beta channels. As you can probably tell, this is the first time I’ve installed it. :-)

Canary currently includes some very helpful developer tools that go beyond what’s in the current stable release of Chrome (which at the time of writing is version 38.0.2125.101 for me, at least). The version of Chrome Canary I have installed is version 40.0.2185.0.

Here’s the main page loaded in Chrome Canary with the enhanced developer tools showing:

Our page in Chrome Canary

The important part is the bottom-right pane which includes sensor emulation information. For more information on enabling this (which you do via the blue “mobile device” icon at the top, next to the search icon) check the online Chrome developer docs.

You can either enter absolute values – which is in itself very handy – or grab onto the device and wiggle it around (which helps emulate more realistic device usage, I expect).

Canary device-tilt

Again, here’s the page for you to try yourself.

In tomorrow’s post we’ll extend this implementation to look at other models, refactoring some of the UI and viewer control code in the process.

October 14, 2014

Creating a stereoscopic viewer for Google Cardboard using the Autodesk 360 viewer – Part 1

After yesterday’s introduction to this series of posts, today we’re going to dive into some specifics, implementing a basic, web-based, stereoscopic viewer.

While this series of posts is really about using Google Cardboard to view Autodesk 360 models in 3D (an interesting topic, I hope you’ll agree ;-), it’s also about how easily you can use the Autodesk 360 viewer to power Google Cardboard: we’ll see it’s a straightforward way to get 3D content into a visualization system that’s really all about 3D.

Let’s start with some basics. We clearly need two views in our web-page, one for each eye. For now we’re not going to worry about making the page full-screen – which basically means hiding the address bar – as we’ll address that when we integrate device-tilt navigation tomorrow. But the web-page will fill the screen real estate that we have, of course.

Our basic stereoscopic 3D viewer

The Autodesk 360 viewer doesn’t currently support multiple viewports on a single scene – even if this is a capability that Three.js provides – so for now we’re going to embed two separate instances of the Autodesk 360 viewer. At some point the viewer will hopefully provide viewporting capability – and allow us to reduce the app’s network usage and memory footprint – but we’ll see over the coming posts that even with two separate viewer instances the app performs well.

In this post and the next we’re going to make use of the Morgan model that we saw “steampunked” using Fusion 360 and then integrated into my first Autodesk 360 application – basically because it’s content that can already be accessed by this particular site. On Thursday we’ll extend that to allow choosing from a selection of models.

The lighting used for this model is different from the previous sample: “simple grey” seems to work better on mobile devices than “riverbank” (which has much more going on in terms of lights and environment backgrounds, etc.).

I’m looking at this viewer as an “object viewer”, which allows us to spin the camera around a fixed point of interest and view it from different angles, rather than a “walk-/fly-through viewer”. This is a choice, of course: you could easily take the foundation shown in this series and make a viewer that’s better-suited for viewing an architectural model from the inside, for instance.

OK, before we go much further, I should probably add this caveat: I don’t actually yet have a Google Cardboard device in my possession. I have a Nexus 4 phone – which has Android 4.4.4 and can run the native Google Cardboard app as well as host WebGL for a web-based viewer implementation – but I don’t actually have the lenses, etc. I have a DODOcase VR Cardboard Toolkit waiting for me in San Francisco, but until now I haven’t tested to see whether the stereoscopic effect works or not. I’ve squinted at the screen from close up, of course, but haven’t yet seen anything jump out in 3D. That said, Jim Quanci assures me it looks great with the proper case, so I’m fairly sure I’m not wasting everyone’s time with these posts.

The main “known unknown” until I test firsthand has been the distance to be used between the two camera positions. Three.js allows us to translate a camera in the X direction (relative to its viewing direction along Z, which basically means pan left or right) very easily, but I’ve had to guess a little with the distance. For now I’ve taken 4% of the distance between the camera and the target – as this gives a very slight difference between the views for various models I tried – but this value may need some tweaking.

Beyond working out the camera positions of the two views, the main work is about keeping them in sync: if the lefthand view changes then the righthand view should adjust to keep the stereo effect, and vice-versa. In my first implementation I used a number of HTML5 events to do this: click, mouseup, mousemove, touchstart, touchend, touchcancel, touchleave & touchmove. And then I realised that there was no simple way to hook into zoom, which drove me crazy for a while. Argh. Eventually I found I could hook into the viewer’s cameraChanged event instead, which was much better (although this event fires for any change in the viewer, and you also need to make sure you don’t get into circular modifications, leading to your model disappearing into the distance… :-).

Here’s an animated GIF of the views being synchronised successfully between the two embedded viewers inside a desktop browser:

Stereo Morgan

Now for some code… here’s the HTML page (which I’ve named stereo-basic.html) for the simple, stereoscopic viewer. I’ve embedded the styles but have kept the JavaScript in a separate file for easier debugging.

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>Basic Stereoscopic Viewer</title>
    <link rel="shortcut icon" type="image/x-icon" href="/favicon.ico?v=2">
    <meta
      name="viewport"
      content=
        "width=device-width, minimum-scale=1.0, maximum-scale=1.0" />
    <link
      rel="stylesheet"
      href="https://developer.api.autodesk.com/viewingservice/v1/viewers/style.css"
      type="text/css">
    <script
      src=
        "https://developer.api.autodesk.com/viewingservice/v1/viewers/viewer3D.min.js">
    </script>
    <script src="js/jquery.js"></script>
    <script src="js/stereo-basic.js"></script>
    <style>
      body {
        margin: 0px;
        overflow: hidden;
      }
    </style>
  </head>
  <body onload="initialize();" oncontextmenu="return false;">
    <table width="100%" height="100%">
      <tr>
        <td width="50%">
          <div id="viewLeft" style="width:50%; height:100%;"></div>
        </td>
        <td width="50%">
          <div id="viewRight" style="width:50%; height:100%;"></div>
        </td>
      </tr>
    </table>
  </body>
</html>

And here’s the referenced JavaScript file:

var viewerLeft, viewerRight;
var updatingLeft = false, updatingRight = false;
var leftLoaded = false, rightLoaded = false, cleanedModel = false;

function initialize() {

  // Get our access token from the internal web-service API

  $.get('http://' + window.location.host + '/api/token',
    function (accessToken) {

      // Specify our options, including the document ID

      var options = {};
      options.env = 'AutodeskProduction';
      options.accessToken = accessToken;
      options.document =
       'dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6c3RlYW1idWNrL1NwTTNXNy5mM2Q=';

      // Create and initialize our two 3D viewers

      var elem = document.getElementById('viewLeft');
      viewerLeft = new Autodesk.Viewing.Viewer3D(elem, {});

      Autodesk.Viewing.Initializer(options, function () {
        viewerLeft.initialize();
        loadDocument(viewerLeft, options.document);
      });

      elem = document.getElementById('viewRight');
      viewerRight = new Autodesk.Viewing.Viewer3D(elem, {});

      Autodesk.Viewing.Initializer(options, function () {
        viewerRight.initialize();
        loadDocument(viewerRight, options.document);
      });
    }
  );
}

function loadDocument(viewer, docId) {

  // The viewer defaults to the full width of the container,
  // so we need to set that to 50% to get side-by-side

  viewer.container.style.width = '50%';
  viewer.resize();

  // Let's zoom in and out of the pivot - the screen
  // real estate is fairly limited - and reverse the
  // zoom direction

  viewer.navigation.setZoomTowardsPivot(true);
  viewer.navigation.setReverseZoomDirection(true);

  if (docId.substring(0, 4) !== 'urn:')
    docId = 'urn:' + docId;

  Autodesk.Viewing.Document.load(docId,
    function (document) {

      // Boilerplate code to load the contents

      var geometryItems = [];

      if (geometryItems.length == 0) {
        geometryItems =
          Autodesk.Viewing.Document.getSubItemsWithProperties(
            document.getRootItem(),
            { 'type': 'geometry', 'role': '3d' },
            true
          );
      }
      if (geometryItems.length > 0) {
        viewer.load(document.getViewablePath(geometryItems[0]));
      }

      // Add our custom progress listener and set the loaded
      // flags to false

      viewer.addEventListener('progress', progressListener);
      leftLoaded = rightLoaded = false;
    },
    function (errorMsg, httpErrorCode) {
      var container = document.getElementById('viewLeft');
      if (container) {
        alert('Load error ' + errorMsg);
      }
    }
  );
}

// Progress listener to set the view once the data has started
// loading properly (we get a 5% notification early on that we
// need to ignore - it comes too soon)

function progressListener(e) {

  // If we haven't cleaned this model's materials and set the view
  // and both viewers are sufficiently ready, then go ahead

  if (!cleanedModel &&
    ((e.percent > 0.1 && e.percent < 5) || e.percent > 5)) {

    if (e.target.clientContainer.id === 'viewLeft')
      leftLoaded = true;
    else if (e.target.clientContainer.id === 'viewRight')
      rightLoaded = true;

    if (leftLoaded && rightLoaded && !cleanedModel) {

      // Iterate the materials to change any red ones to grey

      for (var p in viewerLeft.impl.matman().materials) {
        var m = viewerLeft.impl.matman().materials[p];
        if (m.color.r >= 0.5 && m.color.g == 0 && m.color.b == 0) {
          m.color.r = m.color.g = m.color.b = 0.5;
          m.needsUpdate = true;
        }
      }
      for (var p in viewerRight.impl.matman().materials) {
        var m = viewerRight.impl.matman().materials[p];
        if (m.color.r >= 0.5 && m.color.g == 0 && m.color.b == 0) {
          m.color.r = m.color.g = m.color.b = 0.5;
          m.needsUpdate = true;
        }
      }

      // Zoom to the overall view initially

      zoomEntirety(viewerLeft);
      setTimeout(function () { transferCameras(true); }, 0);

      cleanedModel = true;
    }
  }
  else if (cleanedModel && e.percent > 10) {

    // If we have already cleaned and are even further loaded,
    // remove the progress listeners from the two viewers and
    // watch the cameras for updates

    unwatchProgress();

    watchCameras();
  }
}

// Add and remove the per-viewer event handlers

function watchCameras() {
  viewerLeft.addEventListener('cameraChanged', left2right);
  viewerRight.addEventListener('cameraChanged', right2left);
}

function unwatchCameras() {
  viewerLeft.removeEventListener('cameraChanged', left2right);
  viewerRight.removeEventListener('cameraChanged', right2left);
}

function unwatchProgress() {
  viewerLeft.removeEventListener('progress', progressListener);
  viewerRight.removeEventListener('progress', progressListener);
}

// Event handlers for the cameraChanged events

function left2right() {
  if (!updatingRight) {
    updatingLeft = true;
    transferCameras(true);
    setTimeout(function () { updatingLeft = false; }, 500);
  }
}

function right2left() {
  if (!updatingLeft) {
    updatingRight = true;
    transferCameras(false);
    setTimeout(function () { updatingRight = false; }, 500);
  }
}

function transferCameras(leftToRight) {

  // The direction argument dictates the source and target

  var source = leftToRight ? viewerLeft : viewerRight;
  var target = leftToRight ? viewerRight : viewerLeft;

  var pos = source.navigation.getPosition();
  var trg = source.navigation.getTarget();

  // Set the up vector manually for both cameras

  var upVector = new THREE.Vector3(0, 0, 1);
  source.navigation.setWorldUpVector(upVector);
  target.navigation.setWorldUpVector(upVector);

  // Get the up vector to use for the target camera

  var up = source.navigation.getCameraUpVector();

  // Get the position of the target camera

  var newPos = offsetCameraPos(source, pos, trg, leftToRight);

  // Save the left-hand camera position: device tilt orbits
  // will be relative to this point

  leftPos = leftToRight ? pos : newPos;

  // Zoom to the new camera position in the target

  zoom(
    target, newPos.x, newPos.y, newPos.z, trg.x, trg.y, trg.z,
    up.x, up.y, up.z
  );
}

function offsetCameraPos(source, pos, trg, leftToRight) {

  // Get the distance from the camera to the target

  var xd = pos.x - trg.x;
  var yd = pos.y - trg.y;
  var zd = pos.z - trg.z;
  var dist = Math.sqrt(xd * xd + yd * yd + zd * zd);

  // Use a small fraction of this distance for the camera offset

  var disp = dist * 0.04;

  // Clone the camera and return its X-translated position

  var clone = source.autocamCamera.clone();
  clone.translateX(leftToRight ? disp : -disp);
  return clone.position;
}

// Model-specific helper to zoom into a specific part of the model

function zoomEntirety(viewer) {
  zoom(viewer, -48722.5, -54872, 44704.8, 10467.3, 1751.8, 1462.8);
}

// Set the camera based on a position, target and optional up vector

function zoom(viewer, px, py, pz, tx, ty, tz, ux, uy, uz) {

  // Make sure our up vector is correct for this model

  var upVector = new THREE.Vector3(0, 0, 1);
  viewer.navigation.setWorldUpVector(upVector, true);

  var up =
    (ux && uy && uz) ? new THREE.Vector3(ux, uy, uz) : upVector;

  viewer.navigation.setView(
    new THREE.Vector3(px, py, pz),
    new THREE.Vector3(tx, ty, tz)
  );
  viewer.navigation.setCameraUpVector(up);
}

To host something similar yourself, I recommend starting with the post I linked to earlier and building it up from there (you basically need to provide the ‘/api/token’ server API – using your own client credentials – for this to work).

But you don’t need to build it yourself – or even have an Android device – to give this a try. Simply load the HTML page in your preferred WebGL-capable browser (Chrome is probably safest, considering that’s what I’ve been using when developing this) and have a play.

On a PC it will respond to mouse or touch navigation, of course, but in tomorrow’s post we’ll implement a much more interesting – at least with respect to Google Cardboard, where you can’t get your fingers near the screen to navigate – tilt-based navigation mechanism. We’ll also take a look at how we can use Google Chrome Canary to emulate device-tilt on a PC, reducing the need to jump through the various hoops needed to debug remotely. Interesting stuff. :-)

October 13, 2014

Gearing up for the VR Hackathon

I’m heading back across to the Bay Area on Wednesday for 10 days. There seems to be a pattern forming to my trips across: I’ll spend the first few days in San Francisco – in this case attending internal strategy meetings in our 1 Market office – and then head up after the weekend to San Rafael to work with the members of the AutoCAD engineering team based up there. I’ll still probably head back into SF for the odd day, the following week, but that’s fine: I really like commuting by ferry from Larkspur to the Embarcadero.

The weekend I’m spending in the Bay Area is looking to have a slightly different shape this time, though. Rather than just catching up with old friends (which I still hope to do), I’ve signed up for the VR Hackathon, an event that looks really interesting. I was happy to find out about this one and that it fell exactly during my stay. I’ve even roped a few colleagues into coming along, too.

VR Hackathon

Looking at the “challenges” posted for the hackathon, it seemed worth taking a look at web and mobile VR, as these look like the two that I’m most likely to be able to contribute towards. That led me to reach out to Jim Quanci and Cyrille Fauvel, over in the ADN team, to see what’s been happening with respect to VR platforms such as Oculus Rift and Google Cardboard.

It turns out the ADN team has invested in a few Oculus Rift Developer Kits, but was looking for someone to spend some time fooling around with integrating the new WebGL-based Autodesk 360 viewer with Google Cardboard. And as “fooling around” is my middle name, I signed up enthusiastically. :-)

For those of you who haven’t been following the VR space, lately, I think it’s fair to say that Facebook put the cat amongst the pigeons when they acquired Oculus. Google’s competitive response was very interesting: at this year’s Google I/O they announced Google Cardboard, a simple View-Master-like mount for a smartphone that can be used for AR or VR.

Cardboard

A few notes about the design: there are two lenses that focus the smartphone’s display – which is split in half in landscape mode, with one half for each eye – and there’s a simple magnet-based button on the left as well as an embedded NFC tag to tell the phone when to launch the Cardboard software. The rear camera has also been left clear in case you need its input for a “reality feed” in the case of AR or perhaps some additional information to help with VR.

Aside from the smartphone, the whole package can be made for a few dollars (assuming a certain economy of scale, of course) with the provided instructions. Right now you can pick them up pre-assembled for anywhere between $15 and $30 – still cheap for the capabilities provided. Which has led to the somewhat inevitable nickname of “Oculus Thrift”. :-)

The point Google is making, of course, is that you don’t need expensive, complex kit to do VR: today’s smartphones have a lot of the capabilities needed, in terms of processing power, sensors and responsive, high-resolution displays.

When looking into the possibilities for supporting Cardboard from a software perspective, there seem to be two main options: the first is to create a native Android app using their SDK, the second is to create a web-app such as those available on the Chrome Experiments site.

Given the web-based nature of the Autodesk 360 viewer, it seemed to make sense to follow the latter path. Jim and Cyrille kindly pointed me at an existing integration of Cardboard with Three.js/WebGL, which turned out to be really useful. But we’ll look at some specifics more closely in the next post.
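We'll get into the details next time, but the heart of any side-by-side stereo renderer is simple: render the scene twice – once per eye – into the left and right halves of the canvas, with the cameras offset horizontally by the eye separation. A library-free sketch of that per-eye setup (the numbers are illustrative; Three.js's StereoEffect helper does the equivalent for you):

```javascript
// Compute the viewport rectangle and horizontal camera offset for
// each eye, given the canvas size and an eye separation in world
// units (0.064 approximates human interpupillary distance in
// metres - an illustrative value, scale it to your scene).
function eyeSetup(width, height, eyeSeparation) {
  var half = width / 2;
  return {
    left:  { viewport: { x: 0,    y: 0, w: half, h: height },
             offsetX: -eyeSeparation / 2 },
    right: { viewport: { x: half, y: 0, w: half, h: height },
             offsetX:  eyeSeparation / 2 }
  };
}

// A render loop would then do, per frame and per eye:
//   renderer.setViewport(eye.viewport.x, eye.viewport.y,
//                        eye.viewport.w, eye.viewport.h);
//   camera.position.x = basePosition.x + eye.offsetX;
//   renderer.render(scene, camera);
```

The lenses in the Cardboard goggles then do the rest, fusing the two half-screen images into a single stereoscopic view.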

During the rest of the week – and I expect to post each day until Thursday, at least, so check back often – I’ll cover the following topics:

  1. Creating a stereoscopic viewer for Google Cardboard using the Autodesk 360 viewer
  2. Adding tilt support for model navigation and enabling fullscreen mode
  3. Supporting multiple models

If I manage to get my hands on the pre-release Leap Motion SDK for Android then I’ll try to integrate that, too, at some point. Mounting a Leap Motion controller to the back of the goggles allows you to use hand gestures for additional (valuable) input in a VR environment… I’m thinking this may end up being the “killer app” for Leap Motion (not mine specifically, but VR in general).

Until tomorrow!

October 09, 2014

New Memento build and webinar

As reported over on Scott’s blog, Project Memento v1.0.10.5 is now available on Autodesk Labs. I won’t repeat the specific new features in this release – Scott covers those thoroughly – but I will say that I’m personally most excited about trying the improved .OBJ and .FBX export and the workflows that they enable.

Project Memento

To find out more about Memento, there’s a webinar on Wednesday October 15 at 9am Pacific talking about the tool. During the webinar, Tatjana Dzambazova – whom you may have seen in her excellent TEDx session – will cover topics ranging from uploading photos and working with highly detailed meshes to 3D printing the results.

recap.autodesk.com

And in somewhat related news, the ReCap website has received a welcome refresh. Head on over and check it out!

October 08, 2014

Autodesk software is free for students, teachers and schools (yes, really)

I mentioned this initiative a few months ago, but it turns out it hadn’t been rolled out everywhere: there were regional exceptions, meaning that students in certain countries weren’t eligible for the program at that point. So my apologies if it sounds like I’m repeating myself, but at least it’s good news that I’m announcing twice. :-)

students.autodesk.com

The last kinks have been ironed out of the program, so students, teachers and schools anywhere in the world can now download and use the following Autodesk software for free:

Free software list

So if you’re a student who was expecting to be able to get free Autodesk software tools based on my previous post but it didn’t work out, check again on students.autodesk.com – this time it will!

October 06, 2014

Update on Spark, Autodesk’s 3D printing platform

There’s been a lot in the news about Spark – Autodesk’s entry into the 3D printing market – of late. Earlier in the year we announced this open platform and a reference design for it, but in the last few weeks things have become even more interesting: specific examples of partnerships with companies who are building their own printers based on Spark have started to emerge. I thought it worth aggregating a few of the more interesting articles for those who might have missed them.

I’m personally really interested in the approach Autodesk is taking here. It seems to me that the “additive manufacturing” space is currently dominated by vendors trying to monetize both the upfront hardware investment and the consumables, which are often proprietary (i.e. the razor and the blades). And they’re providing software that’s really an afterthought rather than being considered of prime importance to the customer.

Opening up the platform to people wanting to drive innovation in materials and/or software should have a positive impact on the industry. And presumably be a good thing for users connecting Autodesk design tools with Spark-powered devices, of course.

Autodesk's 1st 3D printer

Here’s an interesting interview where Autodesk’s CTO, Jeff Kowalski, provides some useful background information, including how the Spark platform and the coming Autodesk-branded 3D printers are analogous to Android and Google’s Nexus devices, respectively. And those who have managed to get their hands on the first Spark-based DLP printer are suitably impressed.

As an example of the type of innovation that could conceivably end up in the Spark software platform (I have no idea whether it’s part of the plan or not, mind), check out an Autodesk Research project announced at this week’s UIST (User Interface Software and Technology) Symposium:




PipeDream allows you to create internal pipes and tubes in your 3D-printed models as conduits for wires or for air leading to sensors or even actuators providing haptic feedback.

Local Motors' Strati 
Local Motors was the first to announce a partnership with Autodesk, incorporating Spark into the process for creating the Strati, the first ever 3D-printed car.

Dremel's 3D Idea Builder

A household name in handheld tool systems, Dremel then announced their own 3D printer based on Spark (this one based on FDM).

3DPrintshow's 2014 Brand of the Year

It’s clearly been an interesting few months since Autodesk announced this new focus on 3D printing back in May. In recognition of this – and I have to admit to finding this pretty incredible, personally – 3D Printshow named Autodesk as their 2014 Brand of the Year.

Spark blog

If you find this kind of news interesting, be sure to check this new blog dedicated to Spark on a regular basis – or simply follow the Spark Twitter account. Developments are coming thick and fast!

photo credit: automobileitalia via photopin cc

October 03, 2014

Connecting Three.js to an AutoCAD model – Part 2

To follow on from yesterday’s post, today we’re going to look at two C# source files that work with the HTML page – and referenced JavaScript files – which I will leave online rather than reproducing here.

As a brief reminder of the functionality – if you haven’t yet watched the screencast shown last time – this version of the app shows an embedded 3D view that reacts to the creation – and deletion – of geometry from the associated AutoCAD model. You will see the bounding boxes for geometry appear in the WebGL view (powered by Three.js) as you’re modeling.


Three.js integration with AutoCAD 

The code takes a somewhat different approach from the one we used earlier in the week to display the last area: we still watch for entities being added to or removed from the document we care about, but we pass through the full list of entities added or removed by each command, not just the area of the most recent one. On the JavaScript side of things we store the handle of the associated entity as the Three.js object's name, allowing us to retrieve the object again in case it gets erased.
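The handle trick means object removal reduces to a name lookup on the JavaScript side. Here's a sketch of that piece, assuming a Three.js-style scene exposing getObjectByName() and remove() – the function name removeByHandle is mine, not from the sample:

```javascript
// Remove a mesh previously tagged with an AutoCAD entity handle.
// 'scene' is any object exposing Three.js-style getObjectByName()
// and remove(); when creating the box we would have set
// mesh.name = handle so it can be found again here.
function removeByHandle(scene, handle) {
  var obj = scene.getObjectByName(handle);
  if (obj) {
    scene.remove(obj);
    return true;
  }
  return false; // nothing with that handle - perhaps never added
}
```

The erased handles arriving from .NET would simply be looped over, calling this once per entry.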

This is ultimately a more interesting approach for people wanting to track more detailed information about modeling operations (although admittedly we’re still only passing geometric extents and the handle – we’re not dealing with more complicated data in this “simple” sample).
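Since only extents cross the bridge, the JavaScript side needs very little per solid: each {min, max, handle} entry becomes an axis-aligned box. A sketch of that conversion – the X/Y/Z property names assume what Json.NET emits for a Point3d, so verify against your actual payloads:

```javascript
// Turn one serialized extents entry into the size and centre of a
// box. The result could feed new THREE.BoxGeometry(w, h, d), with
// the mesh positioned at 'center' and named after the handle so it
// can be looked up (and removed) later.
function extentsToBox(entry) {
  var min = entry.min, max = entry.max;
  return {
    name: entry.handle,
    size:   { w: max.X - min.X, h: max.Y - min.Y, d: max.Z - min.Z },
    center: { x: (min.X + max.X) / 2,
              y: (min.Y + max.Y) / 2,
              z: (min.Z + max.Z) / 2 }
  };
}
```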

Here’s the first of the C# source files, which defines the AutoCAD commands to create a palette or an HTML document inside AutoCAD (this latter one is now a bit boring in comparison: it creates a static snapshot of the launching document, but doesn’t track any changes afterwards… the palette is a lot more fun :-).

using Autodesk.AutoCAD.ApplicationServices;
using Autodesk.AutoCAD.DatabaseServices;
using Autodesk.AutoCAD.Geometry;
using Autodesk.AutoCAD.Runtime;
using Autodesk.AutoCAD.Windows;
using Newtonsoft.Json;
using System;
using System.Runtime.InteropServices;

namespace JavaScriptSamples
{
  public class ThreeCommands
  {
    private PaletteSet _3ps = null;
    private static Document _curDoc = null;
    private static ObjectIdCollection _add =
      new ObjectIdCollection();
    private static ObjectIdCollection _remove =
      new ObjectIdCollection();

    [DllImport(
      "AcJsCoreStub.crx", CharSet = CharSet.Auto,
      CallingConvention = CallingConvention.Cdecl,
      EntryPoint = "acjsInvokeAsync")]
    extern static private int acjsInvokeAsync(
      string name, string jsonArgs
    );

    [CommandMethod("THREE")]
    public void ThreePalette()
    {
      // We're storing the "launch document" as we're attaching
      // various event handlers to it

      _curDoc =
        Application.DocumentManager.MdiActiveDocument;

      // Only attach event handlers if the palette isn't already
      // there (in which case it will already have them)

      var attachHandlers = (_3ps == null);

      _3ps =
        Utils.ShowPalette(
          _3ps,
          new Guid("9CEE43FF-FDD7-406A-89B2-6A48D4169F71"),
          "THREE",
          "Three.js Examples",
          GetHtmlPathThree()
        );

      if (attachHandlers && _curDoc != null) {

        Application.DocumentManager.DocumentActivated +=
          OnDocumentActivated;

        _curDoc.BeginDocumentClose +=
          (s, e) =>
          {
            RemoveHandlers(_curDoc);
            _curDoc = null;
          };

        _3ps.SizeChanged += OnPaletteSizeChanged;

        // When the PaletteSet gets destroyed we remove
        // our event handlers

        _3ps.PaletteSetDestroy += OnPaletteSetDestroy;
      }
    }

    [CommandMethod("THREEDOC")]
    public void ThreeDocument()
    {
      _curDoc = Application.DocumentManager.MdiActiveDocument;

      if (_curDoc != null)
      {
        _curDoc.BeginDocumentClose +=
          (s, e) => _curDoc = null;
      }

      Application.DocumentWindowCollection.AddDocumentWindow(
        "Three.js Document", GetHtmlPathThree()
      );
    }

    [JavaScriptCallback("ViewExtents")]
    public string ViewExtents(string jsonArgs)
    {
      // Default return value is failure

      var res = "{\"retCode\":1}";

      if (_curDoc != null)
      {
        var vw = _curDoc.Editor.GetCurrentView();
        var ext = Utils.ScreenExtents(vw);
        res =
          String.Format(
            "{{\"retCode\":0, \"result\":" +
            "{{\"min\":{0},\"max\":{1}}}}}",
            JsonConvert.SerializeObject(ext.MinPoint),
            JsonConvert.SerializeObject(ext.MaxPoint)
          );
      }
      return res;
    }

    [JavaScriptCallback("ThreeSolids")]
    public string ThreeSolids(string jsonArgs)
    {
      return Utils.GetSolids(_curDoc, Point3d.Origin);
    }

    private void OnPaletteSizeChanged(
      object s, PaletteSetSizeEventArgs e
    )
    {
      Refresh();
    }

    private void OnDocumentActivated(
      object s, DocumentCollectionEventArgs e
    )
    {
      if (_3ps != null && e.Document != _curDoc)
      {
        // We're going to monitor when objects get added and
        // erased. We'll use CommandEnded to refresh the
        // palette at most once per command (might also use
        // DocumentManager.DocumentLockModeWillChange)

        // The document is dead...

        RemoveHandlers(_curDoc);
        _add.Clear();
        _remove.Clear();

        // ... long live the document!

        _curDoc = e.Document;
        AddHandlers(_curDoc);

        Refresh();
      }
    }

    private void AddHandlers(Document doc)
    {
      if (doc != null)
      {
        if (doc.Database != null)
        {
          doc.Database.ObjectAppended += OnObjectAppended;
          doc.Database.ObjectErased += OnObjectErased;
        }
        doc.CommandEnded += OnCommandEnded;
      }
    }

    private void RemoveHandlers(Document doc)
    {
      if (doc != null)
      {
        if (doc.Database != null)
        {
          doc.Database.ObjectAppended -= OnObjectAppended;
          doc.Database.ObjectErased -= OnObjectErased;
        }
        doc.CommandEnded -= OnCommandEnded;
      }
    }

    private void OnObjectAppended(object s, ObjectEventArgs e)
    {
      if (e != null && e.DBObject is Solid3d)
      {
        _add.Add(e.DBObject.ObjectId);
      }
    }

    private void OnObjectErased(object s, ObjectErasedEventArgs e)
    {
      if (e != null && e.DBObject is Solid3d)
      {
        var id = e.DBObject.ObjectId;
        if (e.Erased)
        {
          if (!_remove.Contains(id))
          {
            _remove.Add(id);
          }
        }
        else
        {
          if (!_add.Contains(id))
          {
            _add.Add(id);
          }
        }
      }
    }

    private void OnCommandEnded(object s, CommandEventArgs e)
    {
      // Invoke our JavaScript functions to update the palette

      if (_add.Count > 0)
      {
        if (_3ps != null)
        {
          var sols =
            Utils.SolidInfoForCollection(
              (Document)s, Point3d.Origin, _add
            );
          acjsInvokeAsync("addsols", Utils.SolidsString(sols));
        }
        _add.Clear();
      }

      if (_remove.Count > 0)
      {
        if (_3ps != null)
        {
          acjsInvokeAsync("remsols", Utils.GetHandleString(_remove));
          _remove.Clear();
        }
      }
    }

    private void OnPaletteSetDestroy(object s, EventArgs e)
    {
      // When our palette is closed, detach the various
      // event handlers

      if (_curDoc != null)
      {
        RemoveHandlers(_curDoc);
        _curDoc = null;
      }
    }

    private void Refresh()
    {
      if (_3ps != null && _3ps.Count > 0)
      {
        acjsInvokeAsync("refsols", "{}");
      }
    }

    private static Uri GetHtmlPathThree()
    {
      return new Uri(Utils.GetHtmlPath() + "threesolids2.html");
    }
  }
}

This file depends on a shared Utils.cs file:

using Autodesk.AutoCAD.ApplicationServices;
using Autodesk.AutoCAD.DatabaseServices;
using Autodesk.AutoCAD.Geometry;
using Autodesk.AutoCAD.Windows;
using Newtonsoft.Json;
using System;
using System.Collections.Generic;
using System.IO;
using System.Reflection;
using System.Text;

namespace JavaScriptSamples
{
  internal class Utils
  {
    // Helper to get the document a palette was launched from
    // in the case where the active document is null

    internal static Document GetActiveDocument(
      DocumentCollection dm, Document launchDoc = null
    )
    {
      // If we're called from an HTML document, the active
      // document may be null

      var doc = dm.MdiActiveDocument;
      if (doc == null)
      {
        doc = launchDoc;
      }
      return doc;
    }

    internal static string GetSolids(
      Document launchDoc, Point3d camPos, bool sort = false
    )
    {
      var doc =
        Utils.GetActiveDocument(
          Application.DocumentManager,
          launchDoc
        );

      // If we didn't find a document, return

      if (doc == null)
        return "";

      // We could probably get away without locking the document
      // - as we only need to read - but it's good practice to
      // do it anyway

      using (var dl = doc.LockDocument())
      {
        var db = doc.Database;
        var ed = doc.Editor;

        var ids = new ObjectIdCollection();

        using (
          var tr = doc.TransactionManager.StartOpenCloseTransaction()
        )
        {
          // Start by getting the modelspace

          var ms =
            (BlockTableRecord)tr.GetObject(
              SymbolUtilityServices.GetBlockModelSpaceId(db),
              OpenMode.ForRead
            );

          // If in palette mode we can get the camera from the
          // Editor, otherwise we rely on what was provided when
          // the HTML document was launched

          if (launchDoc == null)
          {
            var view = ed.GetCurrentView();
            camPos = view.Target + view.ViewDirection;
          }

          // Get each Solid3d in modelspace and add its extents
          // to the sorted list keyed off the distance from the
          // closest face of the solid (not necessarily true,
          // but this only really is a crude approximation)

          foreach (var id in ms)
          {
            ids.Add(id);
          }
          tr.Commit();
        }

        var sols = SolidInfoForCollection(doc, camPos, ids, sort);

        return SolidsString(sols);
      }
    }

    internal static List<Tuple<double, string, Extents3d>>
    SolidInfoForCollection(
      Document doc, Point3d camPos, ObjectIdCollection ids,
      bool sort = false
    )
    {
      // We'll sort our list of extents objects based on a
      // distance value

      var sols =
        new List<Tuple<double, string, Extents3d>>();

      using (
        var tr = doc.TransactionManager.StartOpenCloseTransaction()
      )
      {
        foreach (ObjectId id in ids)
        {
          var obj = tr.GetObject(id, OpenMode.ForRead);
          var sol = obj as Entity; // could also filter on Solid3d
          if (sol != null)
          {
            var ext = sol.GeometricExtents;
            var tmp =
              ext.MinPoint + 0.5 * (ext.MaxPoint - ext.MinPoint);
            var mid = new Point3d(ext.MinPoint.X, tmp.Y, tmp.Z);
            var dist = camPos.DistanceTo(mid);
            sols.Add(
              new Tuple<double, string, Extents3d>(
                dist, sol.Handle.ToString(), ext
              )
            );
          }
        }
      }

      if (sort)
      {
        sols.Sort((sol1, sol2) => sol2.Item1.CompareTo(sol1.Item1));
      }
      return sols;
    }

    // Helper function to build a JSON string containing a
    // sorted extents list

    internal static string SolidsString(
      List<Tuple<double, string, Extents3d>> lst)
    {
      var sb = new StringBuilder("{\"retCode\":0, \"result\":[");

      var first = true;
      foreach (var tup in lst)
      {
        if (!first)
          sb.Append(",");

        first = false;
        var hand = tup.Item2;
        var ext = tup.Item3;

        sb.AppendFormat(
          "{{\"min\":{0},\"max\":{1},\"handle\":\"{2}\"}}",
          JsonConvert.SerializeObject(ext.MinPoint),
          JsonConvert.SerializeObject(ext.MaxPoint),
          hand
        );
      }
      sb.Append("]}");

      return sb.ToString();
    }

    // Helper function to build a JSON string containing a
    // list of handles

    internal static string GetHandleString(ObjectIdCollection ids)
    {
      var sb = new StringBuilder("{\"handles\":[");
      bool first = true;
      foreach (ObjectId id in ids)
      {
        if (!first)
        {
          sb.Append(",");
        }

        first = false;

        sb.AppendFormat(
          "{{\"handle\":\"{0}\"}}",
          id.Handle.ToString()
        );
      }
      sb.Append("]}");
      return sb.ToString();
    }

    // Helper function to show a palette

    internal static PaletteSet ShowPalette(
      PaletteSet ps, Guid guid, string cmd, string title, Uri uri,
      bool reload = false
    )
    {
      // If the reload flag is true we'll force an unload/reload
      // (this isn't strictly needed - given our refresh function -
      // but I've left it in for possible future use)

      if (reload && ps != null)
      {
        // Close the palette and make sure we process windows
        // messages, otherwise sizing is a problem

        ps.Close();
        System.Windows.Forms.Application.DoEvents();
        ps.Dispose();
        ps = null;
      }

      if (ps == null)
      {
        ps = new PaletteSet(cmd, guid);
      }
      else
      {
        if (ps.Visible)
          return ps;
      }

      if (ps.Count != 0)
      {
        ps.Remove(0);
      }

      ps.Add(title, uri);
      ps.Visible = true;

      return ps;
    }

    internal static Matrix3d Dcs2Wcs(AbstractViewTableRecord v)
    {
      return
        Matrix3d.Rotation(-v.ViewTwist, v.ViewDirection, v.Target) *
        Matrix3d.Displacement(v.Target - Point3d.Origin) *
        Matrix3d.PlaneToWorld(v.ViewDirection);
    }

    internal static Extents3d ScreenExtents(
      AbstractViewTableRecord vtr
    )
    {
      // Get the centre of the screen in WCS and use it
      // with the diagonal vector to add the corners to the
      // extents object

      var ext = new Extents3d();
      var vec = new Vector3d(0.5 * vtr.Width, 0.5 * vtr.Height, 0);
      var ctr =
        new Point3d(vtr.CenterPoint.X, vtr.CenterPoint.Y, 0);
      var dcs = Utils.Dcs2Wcs(vtr);
      ext.AddPoint((ctr + vec).TransformBy(dcs));
      ext.AddPoint((ctr - vec).TransformBy(dcs));

      return ext;
    }

    // Helper function to get the path to our HTML files

    internal static string GetHtmlPath()
    {
      // Use this approach if loading the HTML from the same
      // location as your .NET module

      //var asm = Assembly.GetExecutingAssembly();
      //return Path.GetDirectoryName(asm.Location) + "\\";

      return "http://through-the-interface.typepad.com/files/";
    }
  }
}

I’ve been banging away at the app to get it to fail: the latest version seems fairly solid, but do let me know if you come across any issues with it.

If I’m right, the kind of responsiveness this sample shows should enable all kinds of interesting HTML palette-based applications inside AutoCAD.

October 02, 2014

Connecting Three.js to an AutoCAD model – Part 1

As part of my preparations for AU, I’ve been extending this Three.js integration sample to make it more responsive to model changes: I went ahead and implemented event handlers in .NET – much as we saw in the last post – to send interaction information through to JavaScript so that it can update the HTML palette view.

The code is in pretty good shape, but I still need to decide whether to post it separately or with the other JavaScript samples I’m working on (I’ll also be showing Paper.js and Isomer integrations during my AU talk, as well as a special demo bringing ShapeShifter models into AutoCAD).

In the meantime, here’s a screencast of the Three.js updated integration in action.




My apologies for the sound quality: I’ve managed to lose my external mic and my new MacBook’s internal one picks up a lot of noise from the fan, once it gets going.

Also, if the command list provided by Screencast is getting in the way, switching to full-screen mode should make it easier to see what’s going on.
