October 23, 2014

AutoCAD I/O API: a new batch processing web-service

This is really interesting news I’ve been waiting to share for a while now. And of course it’s the answer to the question I posed in my last post (this is the service the dashboard has been monitoring). Once I get back home to Switzerland I’ll go through the various comments on the post and LinkedIn to see who wins the prize. :-)

The AutoCAD team has been working hard on a cloud-based batch-processing framework that works with AutoCAD data. The current name for the service is the AutoCAD I/O API – Beta.

Random retro photo of a 36-pin Centronics parallel printer port

The service is powered by AcCore, the cross-platform AutoCAD “Core Engine” that was originally created when we built AutoCAD for Mac, during the “Big Split” project. (A side note: the initial working name for this service was AutoCAD Core Engine Services – or ACES – so don’t be confused if you still see references to that name.)

The service is targeted at offline operations – meaning batch processing or operations that don’t require immediate feedback – which allows us to queue the operations to execute optimally. That said, we’re usually talking about seconds to execute, rather than hours or days. :-)

In essence, the service allows developers to call through to an instance of AcCore – running up there in the cloud – to run an AutoCAD Script that performs operations on AutoCAD data, and then to access the results, all through HTTP. Which means, of course, that it can be used from any device that speaks HTTP, which now includes a number of children’s toys. ;-)

That said, as with any authenticated web-service, you will need a client ID and key to gain access. You won’t want to embed these in a client-side application, so you’ll need to create a lightweight web-service of your own that handles authentication, just as we saw when developing an application with Autodesk’s first PaaS offering, the Viewing & Data API.
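To give a feel for what that lightweight service would do: under the hood, the client-credential authentication used below boils down to a standard OAuth2 token request that should only ever be made server-side. Here’s a minimal sketch in JavaScript – the `/oauth2/token` path and parameter names are illustrative, not the exact endpoint for this service:

```javascript
// Build the form-encoded body for an OAuth2 client-credentials
// token request. Performing this on the server keeps the client
// ID and key out of the browser; only the resulting token is
// ever passed down to client-side code.
// (The "/oauth2/token" path is illustrative -- use the token
// endpoint associated with your own credentials.)

function buildTokenRequest(authorityUrl, clientId, clientKey, resource) {
  var params = {
    grant_type: 'client_credentials',
    client_id: clientId,
    client_secret: clientKey,
    resource: resource
  };
  var body = Object.keys(params)
    .map(function (k) {
      return encodeURIComponent(k) + '=' + encodeURIComponent(params[k]);
    })
    .join('&');
  return { url: authorityUrl + '/oauth2/token', body: body };
}
```

Your web-service would POST this body to the token endpoint and hand just the resulting access token back to the browser.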

But for testing purposes we won’t worry about that. Our first application – courtesy of my friend and colleague, Albert Szilvasy – is a simple console application that uses the client ID and key directly to authenticate against the AutoCAD I/O API, then uses the service to create a DWG containing a line and output it to PDF. (In case you’re interested in this service’s “bona fides”, it is currently servicing all PDF output requests from AutoCAD 360. And that’s really just the beginning…)

To get this working, create a simple console application project inside Visual Studio. Call it “AutoCADIoSample” – just to make sure the code works when you copy & paste it in – and add a service reference to “https://autocad.io/api/v1” called “AutoCADIo” (you’ll find step-by-step instructions here, although they currently refer to an older location for the API).

Now you should be ready to copy & paste the following C# code into the Program.cs file. You will, of course, need to apply for your own ID and key (you can do so from here) and paste them into the clientId and clientKey constants.

using System;
using System.IO;
using System.Linq;
using System.Net.Http;
using System.Data.Services.Client;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

namespace AutoCADIoSample
{
  class Program
  {
    const string clientId = "12345678-1234-1234-1234-123467890AB";
    const string clientKey = ""; // Paste your own client key here

    static void Main(string[] args)
    {
      // Obtain token from active directory

      var authCon =
        new AuthenticationContext(
          "..." // Use the authority URL issued with your credentials
        );
      var cred = new ClientCredential(clientId, clientKey);
      var token =
        authCon.AcquireToken("https://autocad.io/api/v1", cred).
          CreateAuthorizationHeader();

      // Instruct client side library to insert token as
      // Authorization value into each request

      var container =
        new AutoCADIo.Container(
          new Uri("http://autocad.io/api/v1/")
        );
      container.SendingRequest2 +=
        (s, e) => e.RequestMessage.SetHeader("Authorization", token);

      // Remove any existing instances of our activity

      var actsToDel =
        container.Activities.Where(a => a.Id == "CreateALine");
      foreach (var actToDel in actsToDel)
      {
        container.DeleteObject(actToDel);
      }
      container.SaveChanges();

      // Create our new activity which generates a DWG containing
      // a line and exports it to PDF

      var act =
        new AutoCADIo.Activity()
        {
          UserId = "",
          Id = "CreateALine",
          Version = 1,
          Instruction = new AutoCADIo.Instruction()
          {
            // The instruction is simply an AutoCAD Script

            Script =
              "_tilemode 1 _line 0,0 1,1  _tilemode 0 " +
              "_save result.dwg\n" +
              "_-export _pdf _all result.pdf\n"
          },
          Parameters = new AutoCADIo.Parameters()
          {
            InputParameters =
            {
              new AutoCADIo.Parameter()
              {
                Name = "HostDwg", LocalFileName = "$(HostDwg)"
              }
            },
            OutputParameters =
            {
              new AutoCADIo.Parameter()
              {
                Name = "DwgResult", LocalFileName = "result.dwg"
              },
              new AutoCADIo.Parameter()
              {
                Name = "PdfResult", LocalFileName = "result.pdf"
              }
            }
          },
          RequiredEngineVersion = "20.0"
        };

      // Add the activity to our container

      container.AddToActivities(act);
      container.SaveChanges();

      // List the available activities: should include CreateALine

      foreach (var a in container.Activities)
      {
        Console.WriteLine("Activity Id: {0}", a.Id);
        Console.WriteLine("User Id: {0}", a.UserId);
        Console.WriteLine("Instruction: {0}", a.Instruction.Script);
        Console.WriteLine(
          "Command Line: {0}",
          !String.IsNullOrEmpty(
            a.Instruction.CommandLineParameters
          ) ? a.Instruction.CommandLineParameters :
          "/i {hostdwg} /i {instructions.scr}");
        foreach (var p in a.Parameters.InputParameters)
          Console.WriteLine(
            "Input '{0}' will be named as '{1}' in working folder.",
            p.Name, p.LocalFileName
          );
        foreach (var p in a.Parameters.OutputParameters)
          Console.WriteLine(
            "Output '{0}' will cause file '{1}' to be uploaded " +
            "from working folder.", p.Name, p.LocalFileName
          );
      }

      // Create a workitem referencing our new activity

      var wi = new AutoCADIo.WorkItem()
      {
        UserId = "", // Must be set to empty
        Id = "", // Must be set to empty
        Arguments = new AutoCADIo.Arguments(),
        Version = 1, // Should always be 1
        ActivityId =
          new AutoCADIo.EntityId()
          {
            UserId = clientId, Id = "CreateALine"
          }
      };

      // Specify an input DWG, which will actually be a blank DWT

      wi.Arguments.InputArguments.Add(
        new AutoCADIo.Argument()
        {
          Name = "HostDwg", // Must match activity's input parameter
          Resource =
            "https://s3.amazonaws.com/" +
            "AutoCAD-Core-Engine-Services/TestDwg/acad.dwt",
          StorageProvider = "Generic" // Generic HTTP download
        });

      // We'll post the DWG to a specified storage location
      // (using generic HTTP rather than storing to A360)

      wi.Arguments.OutputArguments.Add(
        new AutoCADIo.Argument()
        {
          Name = "DwgResult", // Must match activity's output param
          StorageProvider = "Generic", // Generic HTTP upload
          HttpVerb = "POST", // Use HTTP POST when delivering result
          Resource = null // Use storage provided by AutoCAD.io
        });

      // We'll also post the PDF to a specified storage location
      // (using generic HTTP rather than storing to A360)

      wi.Arguments.OutputArguments.Add(
        new AutoCADIo.Argument()
        {
          Name = "PdfResult", // Must match activity's output param
          StorageProvider = "Generic", // Generic HTTP upload
          HttpVerb = "POST", // Use HTTP POST when delivering result
          Resource = null // Use storage provided by AutoCAD.io
        });

      // Add the work item to our container

      container.AddToWorkItems(wi);
      container.SaveChanges();

      // Once saved, the work item should start executing...
      // We'll poll every 5 seconds to see if it's finished

      do
      {
        Console.WriteLine("Sleeping a bit...");
        System.Threading.Thread.Sleep(5000);
        container.LoadProperty(wi, "Status"); // Http request here
      }
      while (wi.Status == "Pending" || wi.Status == "InProgress");

      Console.WriteLine("\nRequest completed. Querying results...");

      // Re-query the service so that we can use the results

      container.MergeOption = MergeOption.OverwriteChanges;
      wi =
        container.WorkItems.Where(
          p => p.UserId == wi.UserId && p.Id == wi.Id
        ).First();

      // Resource property of the output argument "PdfResult"
      // will have the output url for the PDF
      // (for the DWG we'd do exactly the same for "DwgResult")

      var url =
        wi.Arguments.OutputArguments.First(
          a => a.Name == "PdfResult"
        ).Resource;
      if (url != null)
      {
        // Download the resultant PDF, store it locally

        var client = new HttpClient();
        var content =
          (StreamContent)client.GetAsync(url).Result.Content;
        var pdf = "z:\\Data\\line.pdf";
        if (File.Exists(pdf))
          File.Delete(pdf);
        using (var output = File.Create(pdf))
        {
          content.ReadAsStreamAsync().Result.CopyTo(output);
        }
        Console.WriteLine("PDF downloaded to \"{0}\".", pdf);
      }

      url = wi.StatusDetails.Report;
      if (url != null)
      {
        // Download the report, store it locally

        var client = new HttpClient();
        var content =
          (StreamContent)client.GetAsync(url).Result.Content;
        var report = "z:\\Data\\AutoCADIoReport.txt";
        if (File.Exists(report))
          File.Delete(report);
        using (var output = File.Create(report))
        {
          content.ReadAsStreamAsync().Result.CopyTo(output);
        }
        Console.WriteLine("Report downloaded to \"{0}\".", report);
      }

      // Wait for a key to be pressed

      Console.WriteLine("Press a key to continue...");
      Console.ReadKey();
    }
  }
}

A few words on what’s happening here.

After authenticating to use the service, we create a new Activity – think of this as being like a cloud-based “function” for us to call – which will create a DWG file and publish it to PDF.

To make use of this Activity, we need to create a WorkItem – which is like a function call providing the various arguments the function needs to operate.

Once the WorkItem has completed, we simply need to query its data via the service: it should by then have been populated by the AutoCAD I/O API with the various URLs to the output data. We can then download that data and save it to local files.
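The same polling pattern translates naturally to other clients of the service. As a hedged sketch, here’s what the “poll until no longer Pending/InProgress” loop might look like in JavaScript, with the actual HTTP status query abstracted into a caller-supplied function (the function name and interface are mine, not part of the API):

```javascript
// Generic polling helper mirroring the C# loop above: keep
// re-querying a status function until it leaves the
// "Pending"/"InProgress" states. getStatus is assumed to be a
// function returning a Promise for the current status string
// (in practice it would perform the HTTP query).

function pollUntilDone(getStatus, intervalMs) {
  return new Promise(function (resolve, reject) {
    function check() {
      getStatus().then(function (status) {
        if (status === 'Pending' || status === 'InProgress') {
          setTimeout(check, intervalMs); // still running: try again
        } else {
          resolve(status); // e.g. Succeeded or a failure status
        }
      }, reject);
    }
    check();
  });
}
```

Calling pollUntilDone(queryStatus, 5000) would mirror the console app’s 5-second loop.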

Here’s the console window output when we run this code:

AutoCADIoSample in action

Here’s the PDF:

Output PDF

And here are the contents of the report, to give you a sense for the kind of logging performed:

[10/03/2014 08:12:05] Starting work item 9c6f00ec93c1480dba00cd0974b84a46

[10/03/2014 08:12:05] Start download phase.

[10/03/2014 08:12:05] Start downloading file https://s3.amazonaws.com/AutoCAD-Core-Engine-Services/TestDwg/acad.dwt.

[10/03/2014 08:12:05] Bytes downloaded = 31419

[10/03/2014 08:12:05] https://s3.amazonaws.com/AutoCAD-Core-Engine-Services/TestDwg/acad.dwt downloaded as C:\Users\acesworker\AppData\LocalLow\jobs\9c6f00ec93c1480dba00cd0974b84a46\acad.dwt.

[10/03/2014 08:12:05] End download phase.

[10/03/2014 08:12:05] Start preparing script and command line parameters.

[10/03/2014 08:12:05] Start script content.

[10/03/2014 08:12:05] _tilemode 1 _line 0,0 1,1  _tilemode 0 _save result.dwg

_-export _pdf _all result.pdf


[10/03/2014 08:12:05] End script content.

[10/03/2014 08:12:05] Command line: /i "C:\Users\acesworker\AppData\LocalLow\jobs\9c6f00ec93c1480dba00cd0974b84a46\acad.dwt" /isolate job_9c6f00ec93c1480dba00cd0974b84a46 "C:\Users\acesworker\AppData\LocalLow\jobs\9c6f00ec93c1480dba00cd0974b84a46\userdata" /s "C:\Users\acesworker\AppData\LocalLow\jobs\9c6f00ec93c1480dba00cd0974b84a46\script.scr"

[10/03/2014 08:12:05] End preparing script and command line parameters.

[10/03/2014 08:12:05] Start script phase.

[10/03/2014 08:12:05] Start AutoCAD Core Console output.

[10/03/2014 08:12:05] Redirect stdout (file: C:\Users\ACESWO~1\AppData\Local\Temp\accc21082).

[10/03/2014 08:12:05] AutoCAD Core Engine Console - Copyright Autodesk, Inc 2009-2013.

[10/03/2014 08:12:05] Isolating to userId=job_9c6f00ec93c1480dba00cd0974b84a46, userDataFolder=C:\Users\acesworker\AppData\LocalLow\jobs\9c6f00ec93c1480dba00cd0974b84a46\userdata.

[10/03/2014 08:12:05] Regenerating model.

[10/03/2014 08:12:05] Command:

[10/03/2014 08:12:05] Command:

[10/03/2014 08:12:05] Command:

[10/03/2014 08:12:05] Command: _tilemode

[10/03/2014 08:12:05] Enter new value for TILEMODE <1>: 1

[10/03/2014 08:12:05] Command: _line

[10/03/2014 08:12:05] Specify first point: 0,0

[10/03/2014 08:12:05] Specify next point or [Undo]: 1,1

[10/03/2014 08:12:05] Specify next point or [Undo]:

[10/03/2014 08:12:05] Command: _tilemode

[10/03/2014 08:12:05] Enter new value for TILEMODE <1>: 0 Regenerating layout.

[10/03/2014 08:12:05] Regenerating model - caching viewports.

[10/03/2014 08:12:05] Command: _save Save drawing as <C:\Users\acesworker\AppData\LocalLow\jobs\9c6f00ec93c1480dba00cd0974b84a46\userdata\Local\template\acad.dwt>: result.dwg

[10/03/2014 08:12:06] Command: _-export Enter file format [Dwf/dwfX/Pdf] <dwfX>_pdf Enter plot area [Current layout/All layouts]<Current Layout>: _all

[10/03/2014 08:12:06] Enter file name <acad-Layout1.pdf>: result.pdf

[10/03/2014 08:12:06] Regenerating layout.

[10/03/2014 08:12:06] Regenerating model.

[10/03/2014 08:12:06] Command:

[10/03/2014 08:12:06] Command: Effective plotting area:  8.04 wide by 10.15 high

[10/03/2014 08:12:06] Effective plotting area:  6.40 wide by 8.40 high

[10/03/2014 08:12:06] Plotting viewport 2.

[10/03/2014 08:12:06] Plotting viewport 1.

[10/03/2014 08:12:06] Command: _quit

[10/03/2014 08:12:06] End AutoCAD Core Console output

[10/03/2014 08:12:06] End script phase.

[10/03/2014 08:12:06] Start upload phase.

[10/03/2014 08:12:06] Start uploading.

[10/03/2014 08:12:06] Target url: https://acesprod-bucket.s3-us-west-1.amazonaws.com/aces-workitem-outputs/9c6f00ec93c1480dba00cd0974b84a46/result.dwg?AWSAccessKeyId=ASIAIURT4LB4UT6AQUQQ&Expires=1412327526&x-amz-security-token=AQoDYXdzEHAa0ANVvX5bcflsH6HOUgkdeZaXsnR523sDP0j%2FwKSG%2B4fXEwLpAQF5oOXaq2s2gOIFFlbY0AeL7K%2BTx%2Bpnr2wyc5LVAgu5YrTZDt01BTS4YL5NYGPHJqZuYrFpX673UomYh1qdhK31l%2BJFzqk1L5NZofkQneY9FUPYQGxkEhGivI4ZCc%2FNqvd250Epc20DaWbAboE2kjLtEp5XkZRmfPR5StaerELbJNDk6ETlZBN4z%2FwSTxR5Yg1lhq%2BbIc27fDroU%2BLWJrkgbJUmQpXAqLDnmoVRR6RUopcWSM0sS8Mecq7iv%2BGhW%2F2udeMT8Ik9xfeVn19xRJ%2BVzww%2FkT6lY8v5AkwSVx3OGNAFPlAmFOPwWEzFrSTQXn9XU9hkE2TQY29wiLRTbL5EjOxV1anrYRnm7UjIOpY0h%2BdQjQO4fer3SAJZWx17Kk%2FF0iGT35n09pGElPqpiwcy%2FoCjNs432TGJXMLq1mOw5KqEUc7CkMF6pPbiJUc5109tsS4SALh%2B5cQhWP0pibYKns1vsxZioA9mEVClsezKsq%2BJRzjUkWbpVbEDz7fCy7ncY0yN0gWCTX5eIWwQdbzg%2BP%2Bv9au44OhJMPpiOu54IUCVNZnY2Du2kEkgayD4krmhBQ%3D%3D&Signature=4lUgaBWg6N8KeNUgpGfSl7Wcoy8%3D

[10/03/2014 08:12:06] End uploading.

[10/03/2014 08:12:06] Start uploading.

[10/03/2014 08:12:06] Target url: https://acesprod-bucket.s3-us-west-1.amazonaws.com/aces-workitem-outputs/9c6f00ec93c1480dba00cd0974b84a46/result.pdf?AWSAccessKeyId=ASIAIURT4LB4UT6AQUQQ&Expires=1412327527&x-amz-security-token=AQoDYXdzEHAa0ANVvX5bcflsH6HOUgkdeZaXsnR523sDP0j%2FwKSG%2B4fXEwLpAQF5oOXaq2s2gOIFFlbY0AeL7K%2BTx%2Bpnr2wyc5LVAgu5YrTZDt01BTS4YL5NYGPHJqZuYrFpX673UomYh1qdhK31l%2BJFzqk1L5NZofkQneY9FUPYQGxkEhGivI4ZCc%2FNqvd250Epc20DaWbAboE2kjLtEp5XkZRmfPR5StaerELbJNDk6ETlZBN4z%2FwSTxR5Yg1lhq%2BbIc27fDroU%2BLWJrkgbJUmQpXAqLDnmoVRR6RUopcWSM0sS8Mecq7iv%2BGhW%2F2udeMT8Ik9xfeVn19xRJ%2BVzww%2FkT6lY8v5AkwSVx3OGNAFPlAmFOPwWEzFrSTQXn9XU9hkE2TQY29wiLRTbL5EjOxV1anrYRnm7UjIOpY0h%2BdQjQO4fer3SAJZWx17Kk%2FF0iGT35n09pGElPqpiwcy%2FoCjNs432TGJXMLq1mOw5KqEUc7CkMF6pPbiJUc5109tsS4SALh%2B5cQhWP0pibYKns1vsxZioA9mEVClsezKsq%2BJRzjUkWbpVbEDz7fCy7ncY0yN0gWCTX5eIWwQdbzg%2BP%2Bv9au44OhJMPpiOu54IUCVNZnY2Du2kEkgayD4krmhBQ%3D%3D&Signature=leR9Gdzg6ggabjNHI6QEWBcPccQ%3D

[10/03/2014 08:12:06] End uploading.

[10/03/2014 08:12:06] End upload phase.

[10/03/2014 08:12:06] Job finished with result Succeeded

This service clearly has a lot of potential, especially for creating applications where you need some kind of DWG processing from an environment that isn’t suited to hosting AutoCAD (such as a mobile app or a web-based configurator that cranks out DWGs).

I would expect a modest cost to be associated with using the service, in due course, so don’t be surprised when that happens. But right now you can give it a try for free and consider how such a service might be used in your applications.

One thing I’ll show in a follow-up post is how to include custom application modules in your activities, so the scripts you execute via the AutoCAD I/O API can call custom commands.

photo credit: dvanzuijlekom via photopin cc

October 22, 2014

A dashboard... but what for?

Here’s a little bit of fun. There’s a new dashboard in the San Rafael office, but what does it show?


Post your guess as a comment: the closest – or most humorous, depending on my mood – will win a free “I♥3D” Autodesk T-shirt (I’ll contact the winner to get their size).

And when I say humorous that doesn’t mean rude or insulting. Please keep it clean & polite, people. :-)

October 20, 2014

VR Hackathon 2014 in SF

It’s been a busy few days. After being in full-day meetings on Thursday and Friday, I headed down with Jim Quanci to the VR Hackathon’s kick-off event on Friday night. It was held at the newly refurbished Gray Area Theater in San Francisco’s Mission district.

The Friday night “mega meetup” was a great way to kick the event off, with presentations from NASA’s JPL on how they teamed up with Sony to develop a prototype VR system to control robots for asteroid mining.

Asteroid mining

There was also an interesting presentation on the evolution of VR tech from Leap Motion’s founder and CTO, David Holz.

David Holz from Leap Motion

Jim and I set up a table – as Autodesk sponsored the event – and over the course of the weekend talked to various people about the Autodesk Viewing & Data API (and about our products and APIs, in general).

Jim talking 3D

To help demo the stereoscopic viewer – and to attract people to ask us to check it out – I had some fun putting together a version that auto-orbits and explodes the contents (once fully loaded). This version is best viewed in a browser, of course, as it doesn’t respond to device tilt.

Our DODOcase and PCs on the last day

I wasn’t sure whether I’d end up participating in a team, but I had so much fun hanging out with Jim and chatting to people that I stuck with that.

There were lots of fun things going on with the various teams…

Lots of medically oriented devices

A team working with Google Cardboard

There were even a few Autodeskers present. Lars Schneider – a member of the Infraworks team in Potsdam, Germany – formed a team with Torsten Becker, a friend of his who was also visiting SF. They’re pictured here with Michael Beale, who has worked on our web-based viewing technology and is currently on the rendering-as-a-service team.

Michael, Torsten and Lars

Aside from answering questions and giving demos, I also spent some time checking out the other sponsors’ technology. Sony’s asteroid-mining tech was neat:

Kean in Morpheus

As was the combination of Leap Motion with Oculus Rift:

Leap and Oculus

Leap Motion's Oculus Rift demo - keeping balls in the air

Aside from seeing Leap Motion with Oculus Rift, at least one team was using an alpha version of the Android SDK to provide input into a Google Cardboard-based game. Something I intend to do myself, once I manage to get a phone that supports the SDK.

Someone using Google Cardboard with Leap Motion via the Android SDK

Sunday afternoon was all about judging. A number of the more tethered solutions were judged by a roving panel of expert judges…

Judges judging

… but several others ended up being presented on the main stage. Here’s a section of a video of Lars & Torsten’s Oculus Rift + Leap Motion app that I’m very happy to say ended up winning the WebVR category.

Lars and Torsten's Hackathon demo

(Lars tells me they’ll be posting the code soon – I’ll be sure to link to it here.)

Way to go, guys – makes me feel good to see a fellow Autodesker doing so well at this event. :-)

Overall it was a great weekend. There were some really cool projects – such as a Leap Motion-based hand tremor detector, a procedurally-generated game world (which reminded me of Elite) and a CAD-like tool that allows you to tweak the design of a lamp shade by tweaking the position of shadows on the wall. Awesome stuff.

Many thanks to Damon Hernandez and members of the Web 3D consortium for all the hard work. I hope I’ll be able to make it across to the next event!

October 16, 2014

Creating a stereoscopic viewer for Google Cardboard using the Autodesk 360 viewer – Part 3

After introducing the topic, showing a basic stereoscopic viewer using the Autodesk 360 viewer and then adding full-screen and device-tilt navigation, today we’re going to extend our UI to allow viewing of multiple models.

Firstly, it’s worth pointing out that for models to be accessible by a viewer using my client credentials, the content also needs to have been uploaded with those same credentials. You can follow the procedure in this previous post to see how to do that, although I believe the ADN team has created some samples that help simplify the process, too.

Once you have the Base64 document IDs for your various models, it’s pretty simple to abstract the code to work on an arbitrary model. The main caveat is that there may be custom behaviours you want for particular models. For instance, there are models for which the up direction is the Z-axis rather than the Y-axis (mainly because the translation process isn’t perfect – or at least wasn’t when the model was processed), or for which you may want to save a custom view.
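In case it helps to see where those IDs come from: in the Viewing & Data API samples, the viewer’s document ID is the object’s URN, Base64-encoded and prefixed with “urn:”. A quick sketch of that encoding – treat the exact prefix convention as an assumption based on those samples:

```javascript
// Sketch: derive a viewer document ID from an object URN by
// Base64-encoding it and adding the "urn:" prefix (following
// the convention used in the Viewing & Data API samples --
// treat this as illustrative rather than definitive).

function toDocumentId(objectUrn) {
  // btoa() is available in the browser; fall back to Buffer
  // when running under Node.js
  var encoded = (typeof btoa === 'function') ?
    btoa(objectUrn) :
    Buffer.from(objectUrn, 'utf8').toString('base64');
  return 'urn:' + encoded;
}
```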

We take care of this in the code below by providing a couple of optional arguments to our launchViewer() function that can be used to specify an up direction and an initial zoom for particular models.

And that’s pretty much all this version of the code does beyond yesterday’s. Here’s the main modified section – you can, of course, just take a look at the complete file.

var viewerLeft, viewerRight;
var updatingLeft = false, updatingRight = false;
var leftLoaded = false, rightLoaded = false, cleanedModel = false;
var leftPos, baseDir, upVector;
var initZoom;

function Commands() { }

Commands.morgan = function () {
  launchViewer(
    '...', // Base64 document ID for the Morgan model
    new THREE.Vector3(0, 0, 1),
    function () {
      zoom(
        viewerLeft,
        -48722.5, -54872, 44704.8,
        10467.3, 1751.8, 1462.8
      );
    }
  );
};

Commands.robot_arm = function () {
  launchViewer('...'); // Base64 document ID for this model
};

Commands.chassis = function () {
  launchViewer('...');
};

Commands.front_loader = function () {
  launchViewer(
    '...',
    new THREE.Vector3(0, 0, 1)
  );
};

Commands.suspension = function () {
  launchViewer('...');
};

Commands.V8_engine = function () {
  launchViewer('...');
};

function initialize() {

  // Populate our initial UI with a set of buttons, one for each
  // function in the Commands object

  var panel = document.getElementById('control');
  for (var fn in Commands) {
    var button = document.createElement('div');

    // Replace any underscores with spaces before setting the
    // visible name

    button.innerHTML = fn.replace('_', ' ');
    button.onclick = (function (fn) {
      return function () { fn(); };
    })(Commands[fn]);

    // Add the button with a space under it

    panel.appendChild(button);
    panel.appendChild(document.createElement('br'));
  }
}

function launchViewer(docId, upVec, zoomFunc) {

  // Assume the default "world up vector" of the Y-axis
  // (only atypical models such as Morgan and Front Loader require
  // the Z-axis to be set as up)

  upVec =
    typeof upVec !== 'undefined' ?
      upVec :
      new THREE.Vector3(0, 1, 0);

  // Ask for the page to be fullscreen
  // (can only happen in a function called from a
  // button-click handler or some other UI event)

  launchFullscreen(document.documentElement); // helper in the complete file

  // Hide the controls that brought us here

  var controls = document.getElementById('control');
  controls.style.visibility = 'hidden';

  // Bring the layer with the viewers to the front
  // (important so they also receive any UI events)

  var layer1 = document.getElementById('layer1');
  var layer2 = document.getElementById('layer2');
  layer1.style.zIndex = 1;
  layer2.style.zIndex = 2;

  // Store the up vector in a global for later use

  upVector = new THREE.Vector3().copy(upVec);

  // The same for the optional Initial Zoom function

  if (zoomFunc)
    initZoom = zoomFunc;

  // Get our access token from the internal web-service API

  $.get('http://' + window.location.host + '/api/token',
    function (accessToken) {

      // Specify our options, including the provided document ID

      var options = {};
      options.env = 'AutodeskProduction';
      options.accessToken = accessToken;
      options.document = docId;

      // Create and initialize our two 3D viewers

      var elem = document.getElementById('viewLeft');
      viewerLeft = new Autodesk.Viewing.Viewer3D(elem, {});

      Autodesk.Viewing.Initializer(options, function () {
        viewerLeft.initialize();
        loadDocument(viewerLeft, options.document);
      });

      elem = document.getElementById('viewRight');
      viewerRight = new Autodesk.Viewing.Viewer3D(elem, {});

      Autodesk.Viewing.Initializer(options, function () {
        viewerRight.initialize();
        loadDocument(viewerRight, options.document);
      });
    }
  );
}

When you launch the HTML page it looks a bit different from last time, but only in that there’s now a choice of models to select from.

Here’s a slightly faked view of the UI on a mobile device (I’ve combined two screenshots to get the full UI on one screen):

The choice of models

We’ve seen plenty of the Morgan model, but here’s a quick taste of the others. There isn’t currently a back button in the UI, so you’ll have to reload the page to switch between models.

Robot Arm

Front Loader


V8 Engine

I haven’t included the “Chassis” model, here: for some reason this looks great on my PC but is all black on my Android device. I’m not sure why, but I’ve nonetheless left it in the model list, for now.

I’ve now arrived in San Francisco and have been finally able to test with DODOcase’s Google Cardboard viewer. And it looks really good! I was expecting to have to tweak the camera offset, but that seems to be fine. I was also concerned I’d need to put a spherical warp on each viewer to compensate for lens distortion, but honestly that seems unnecessary, too. Probably because we’re dealing with a central object view rather than walking through a scene.

I have to admit to finding the experience quite compelling. If you’re coming to AU or to the upcoming DevDays tour then you’ll be able to see for yourself there. Assuming you don’t want to buy or build your own and try it in the meantime, of course. :-)

October 15, 2014

Creating a stereoscopic viewer for Google Cardboard using the Autodesk 360 viewer – Part 2

I’m heading out the door in a few minutes to take the train to Zurich and a (thankfully direct) flight from there to San Francisco. I’ll have time on the flight to write up the next part in the series, so all will be in place for this weekend’s VR Hackathon.

In today’s post we’re going to extend the implementation we saw yesterday (and introduced on Monday) by adding full-screen viewing and device-tilt navigation.

Full-screen mode is easy: I borrowed some code from here that works well. The only thing to keep in mind is that the API can only be called in a UI event handler (such as when someone has pressed a button) – this is clearly intended to stop naughty pages from forcing you into full-screen mode on load. So we’re adding a single, huge “Start” button to launch the viewer. Nothing particularly interesting, although we do hide – and change the Z-order on – some divs to make an apparently multi-page UI happen via a single HTML file. We’ll extend this approach in tomorrow’s post to show more buttons, one for each hosted model.
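The borrowed full-screen code boils down to something like the helper below. Vendor-prefixed names varied between browsers at the time, so take this as a sketch rather than the exact code in the sample:

```javascript
// Request full-screen display for an element, trying the
// standard API first and then the vendor-prefixed variants.
// Must be called from inside a UI event handler (e.g. onclick),
// or the browser will ignore the request.

function launchFullscreen(elem) {
  if (elem.requestFullscreen) {
    elem.requestFullscreen();
  } else if (elem.mozRequestFullScreen) {
    elem.mozRequestFullScreen();   // Firefox
  } else if (elem.webkitRequestFullscreen) {
    elem.webkitRequestFullscreen(); // Chrome, Safari
  } else if (elem.msRequestFullscreen) {
    elem.msRequestFullscreen();    // IE11
  }
}
```

Typically you’d call launchFullscreen(document.documentElement) to take the whole page full-screen.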

Device-tilt support is only a little more involved: the window has a ‘deviceorientation’ event we can listen to that gives us alpha/beta/gamma values representing data coming from the host device’s sensors (presumably the accelerometer and magnetometer). These appear to be given irrespective of the actual orientation (meaning whether it’s in portrait or landscape mode). We’re only interested in landscape mode, so we need to look at the alpha value for the horizontal (left-right) rotation and gamma for the vertical (front-back) rotation. The vertical rotation can be absolute, but we want to fix the left-right rotation based on an initial direction – horizontal rotations after that should be relative to that initial direction.

The HTML page hasn’t changed substantially – it has some additional styles, but that’s about it.

Here are the relevant additions to the referenced JavaScript file (I’ve omitted the UI changes and the event handler subscription – you can get the full source here).

function orb(e) {

  if (e.alpha && e.gamma) {

    // Remove our handlers watching for camera updates,
    // as we'll make any changes manually
    // (we won't actually bother adding them back, afterwards,
    // as this means we're in mobile mode and probably inside
    // a Google Cardboard holder)

    unwatchCameras(); // helper in the full source

    // Our base direction allows us to make relative horizontal
    // rotations when we rotate left & right

    if (!baseDir)
      baseDir = e.alpha;

    if (viewerLeft.running && viewerRight.running) {

      var deg2rad = Math.PI / 180;

      // gamma is the front-to-back in degrees (with
      // this screen orientation) with +90/-90 being
      // vertical and negative numbers being 'downwards'
      // with positive being 'upwards'

      var vert = (e.gamma + (e.gamma <= 0 ? 90 : -90)) * deg2rad;

      // alpha is the compass direction the device is
      // facing in degrees. This equates to the
      // left - right rotation in landscape
      // orientation (with 0-360 degrees)

      var horiz = (e.alpha - baseDir) * deg2rad;

      orbitViews(vert, horiz);
    }
  }
}

function orbitViews(vert, horiz) {

  // We'll rotate our position based on the initial position
  // and the target will stay the same

  var pos = new THREE.Vector3().copy(leftPos);
  var trg = viewerLeft.navigation.getTarget();

  // Start by applying the left/right orbit
  // (we need to check the up/down value, though)

  var zAxis = new THREE.Vector3(0, 0, 1);
  pos.applyAxisAngle(zAxis, (vert < 0 ? horiz + Math.PI : horiz));

  // Now add the up/down rotation, about the horizontal axis
  // perpendicular to the view direction

  var axis = new THREE.Vector3().subVectors(pos, trg).normalize();
  axis.cross(upVector);

  pos.applyAxisAngle(axis, vert);

  // Zoom in with the lefthand view

  zoom(viewerLeft, pos.x, pos.y, pos.z, trg.x, trg.y, trg.z);

  // Get a camera slightly to the right

  var pos2 = offsetCameraPos(viewerLeft, pos, trg, true);

  // And zoom in with that on the righthand view, too

  var up = viewerLeft.navigation.getCameraUpVector();

  zoom(
    viewerRight,
    pos2.x, pos2.y, pos2.z,
    trg.x, trg.y, trg.z,
    up.x, up.y, up.z
  );
}

So how can we test this? Obviously with a physical device – and I recommend using Chrome on an Android device for best results – or you can choose to use Google Chrome Canary on your PC (whether Mac or Windows). Canary is the codename for the bleeding-edge build of Chrome, ahead of the current stable release: I don’t actually know whether the next release is always called Canary, or whether this changes. As you can probably tell, this is the first time I’ve installed it. :-)

Canary currently includes some very helpful developer tools that go beyond what’s in the current stable release of Chrome (which at the time of writing is version 38.0.2125.101 for me, at least). The version of Chrome Canary I have installed is version 40.0.2185.0.

Here’s the main page loaded in Chrome Canary with the enhanced developer tools showing:

Our page in Chrome Canary

The important part is the bottom-right pane which includes sensor emulation information. For more information on enabling this (which you do via the blue “mobile device” icon at the top, next to the search icon) check the online Chrome developer docs.

You can either enter absolute values – which is in itself very handy – or grab onto the device and wiggle it around (which helps emulate more realistic device usage, I expect).

Canary device-tilt

Again, here’s the page for you to try yourself.

In tomorrow’s post we’ll extend this implementation to look at other models, refactoring some of the UI and viewer control code in the process.

October 14, 2014

Creating a stereoscopic viewer for Google Cardboard using the Autodesk 360 viewer – Part 1

After yesterday’s introduction to this series of posts, today we’re going to dive into some specifics, implementing a basic, web-based, stereoscopic viewer.

While this series of posts is really about using Google Cardboard to view Autodesk 360 models in 3D (an interesting topic, I hope you’ll agree ;-), it’s also about how easily you can use the Autodesk 360 viewer to power Google Cardboard: we’ll see it’s a straightforward way to get 3D content into a visualization system that’s really all about 3D.

Let’s start with some basics. We clearly need two views in our web-page, one for each eye. For now we’re not going to worry about making the page full-screen – which basically means hiding the address bar – as we’ll address that when we integrate device-tilt navigation tomorrow. But the web-page will fill the screen real estate that we have, of course.

Our basic stereoscopic 3D viewer

The Autodesk 360 viewer doesn’t currently support multiple viewports on a single scene – even if this is a capability that Three.js provides – so for now we’re going to embed two separate instances of the Autodesk 360 viewer. At some point the viewer will hopefully provide viewporting capability – and allow us to reduce the app’s network usage and memory footprint – but we’ll see over the coming posts that even with two separate viewer instances the app performs well.

In this post and the next we’re going to make use of the Morgan model that we saw “steampunked” using Fusion 360 and then integrated into my first Autodesk 360 application – basically because it’s the model whose content can already be accessed by this particular site. On Thursday we’ll extend that to let you choose from a selection of models.

The lighting used for this model is different from the previous sample’s: “simple grey” works better on mobile devices than “riverbank”, it seems (the latter has much more going on in terms of lights and environment backgrounds, etc.).

I’m looking at this viewer as an “object viewer”, which allows us to spin the camera around a fixed point of interest and view it from different angles, rather than a “walk-/fly-through viewer”. This is a choice, of course: you could easily take the foundation shown in this series and make a viewer that’s better-suited for viewing an architectural model from the inside, for instance.

OK, before we go much further, I should probably add this caveat: I don’t actually yet have a Google Cardboard device in my possession. I have a Nexus 4 phone – which has Android 4.4.4 and can run the native Google Cardboard app as well as host WebGL for a web-based viewer implementation – but I don’t actually have the lenses, etc. I have a DODOcase VR Cardboard Toolkit waiting for me in San Francisco, but until now I haven’t tested to see whether the stereoscopic effect works or not. I’ve squinted at the screen from close up, of course, but haven’t yet seen anything jump out in 3D. That said, Jim Quanci assures me it looks great with the proper case, so I’m fairly sure I’m not wasting everyone’s time with these posts.

The main “known unknown” until I test firsthand has been the distance to be used between the two camera positions. Three.js allows us to translate a camera in the X direction (relative to its viewing direction along Z, which basically means pan left or right) very easily, but I’ve had to guess a little with the distance. For now I’ve taken 4% of the distance between the camera and the target – as this gives a very slight difference between the views for various models I tried – but this value may need some tweaking.
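To make the 4% rule concrete, here’s the arithmetic in isolation (a simplified sketch of the same calculation performed by offsetCameraPos() in the listing below):

```javascript
// Distance between the two eye positions, as a fraction of the
// camera-to-target distance (this series uses 0.04, i.e. 4%).
function eyeSeparation(pos, trg, fraction) {
  var xd = pos.x - trg.x;
  var yd = pos.y - trg.y;
  var zd = pos.z - trg.z;
  return Math.sqrt(xd * xd + yd * yd + zd * zd) * fraction;
}
```

So a camera orbiting 100 units from the model’s centre places the right eye 4 units to the right of the left eye; tweak the fraction if the stereo effect feels too strong or too weak.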

Beyond working out the camera positions of the two views, the main work is about keeping them in sync: if the lefthand view changes then the righthand view should adjust to keep the stereo effect and vice-versa. In my first implementation I used a number of HTML5 events to do this: click, mouseup, mousemove, touchstart, touchend, touchcancel, touchleave & touchmove. And then I realised that there was no simple way to hook into zoom, which drove me crazy for a while. Argh. But then I realised I could hook into the viewer’s cameraChanged event, instead, which was much better (although this gets called for any change in the viewer, and you also need to make sure you don’t get into some circular modifications, leading to your model disappearing into the distance… :-).

Here’s an animated GIF of the views being synchronised successfully between the two embedded viewers inside a desktop browser:

Stereo Morgan

Now for some code… here’s the HTML page (which I’ve named stereo-basic.html) for the simple, stereoscopic viewer. I’ve embedded the styles but have kept the JavaScript in a separate file for easier debugging.

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>Basic Stereoscopic Viewer</title>
    <link rel="shortcut icon" type="image/x-icon" href="/favicon.ico?v=2">
    <meta name="viewport"
      content=
        "width=device-width, minimum-scale=1.0, maximum-scale=1.0" />
    <link rel="stylesheet" type="text/css"
      href=
        "https://viewing.api.autodesk.com/viewingservice/v1/viewers/style.css" />
    <script src=
      "https://viewing.api.autodesk.com/viewingservice/v1/viewers/viewer3D.min.js">
    </script>
    <script src="js/jquery.js"></script>
    <script src="js/stereo-basic.js"></script>
    <style>
      body {
        margin: 0px;
        overflow: hidden;
      }
    </style>
  </head>
  <body onload="initialize();" oncontextmenu="return false;">
    <table width="100%" height="100%">
      <tr>
        <td width="50%">
          <div id="viewLeft" style="width:50%; height:100%;"></div>
        </td>
        <td width="50%">
          <div id="viewRight" style="width:50%; height:100%;"></div>
        </td>
      </tr>
    </table>
  </body>
</html>
And here’s the referenced JavaScript file:

var viewerLeft, viewerRight;
var updatingLeft = false, updatingRight = false;
var leftLoaded = false, rightLoaded = false, cleanedModel = false;
var leftPos;


function initialize() {

  // Get our access token from the internal web-service API

  $.get('http://' + window.location.host + '/api/token',
    function (accessToken) {

      // Specify our options, including the document ID

      var options = {};
      options.env = 'AutodeskProduction';
      options.accessToken = accessToken;
      options.document =
        'urn:<your-model-urn>';

      // Create and initialize our two 3D viewers

      var elem = document.getElementById('viewLeft');
      viewerLeft = new Autodesk.Viewing.Viewer3D(elem, {});

      Autodesk.Viewing.Initializer(options, function () {
        viewerLeft.initialize();
        loadDocument(viewerLeft, options.document);
      });

      elem = document.getElementById('viewRight');
      viewerRight = new Autodesk.Viewing.Viewer3D(elem, {});

      Autodesk.Viewing.Initializer(options, function () {
        viewerRight.initialize();
        loadDocument(viewerRight, options.document);
      });
    }
  );
}


function loadDocument(viewer, docId) {

  // The viewer defaults to the full width of the container,
  // so we need to set that to 50% to get side-by-side

  viewer.container.style.width = '50%';
  viewer.resize();

  // Let's zoom in and out of the pivot - the screen
  // real estate is fairly limited - and reverse the
  // zoom direction

  viewer.navigation.setZoomTowardsPivot(true);
  viewer.navigation.setReverseZoomDirection(true);

  if (docId.substring(0, 4) !== 'urn:')
    docId = 'urn:' + docId;

  Autodesk.Viewing.Document.load(docId,
    function (document) {

      // Boilerplate code to load the contents

      var geometryItems = [];

      if (geometryItems.length == 0) {
        geometryItems =
          Autodesk.Viewing.Document.getSubItemsWithProperties(
            document.getRootItem(),
            { 'type': 'geometry', 'role': '3d' },
            true
          );
      }
      if (geometryItems.length > 0) {
        viewer.load(document.getViewablePath(geometryItems[0]));
      }

      // Add our custom progress listener and set the loaded
      // flags to false

      viewer.addEventListener('progress', progressListener);
      leftLoaded = rightLoaded = false;
    },
    function (errorMsg, httpErrorCode) {
      var container = document.getElementById('viewerLeft');
      if (container) {
        alert('Load error ' + errorMsg);
      }
    }
  );
}


// Progress listener to set the view once the data has started

// loading properly (we get a 5% notification early on that we

// need to ignore - it comes too soon)


function progressListener(e) {

  // If we haven't cleaned this model's materials and set the view
  // and both viewers are sufficiently ready, then go ahead

  if (!cleanedModel &&
    ((e.percent > 0.1 && e.percent < 5) || e.percent > 5)) {

    if (e.target.clientContainer.id === 'viewLeft')
      leftLoaded = true;
    else if (e.target.clientContainer.id === 'viewRight')
      rightLoaded = true;

    if (leftLoaded && rightLoaded && !cleanedModel) {

      // Iterate the materials to change any red ones to grey

      for (var p in viewerLeft.impl.matman().materials) {
        var m = viewerLeft.impl.matman().materials[p];
        if (m.color.r >= 0.5 && m.color.g == 0 && m.color.b == 0) {
          m.color.r = m.color.g = m.color.b = 0.5;
          m.needsUpdate = true;
        }
      }
      for (var p in viewerRight.impl.matman().materials) {
        var m = viewerRight.impl.matman().materials[p];
        if (m.color.r >= 0.5 && m.color.g == 0 && m.color.b == 0) {
          m.color.r = m.color.g = m.color.b = 0.5;
          m.needsUpdate = true;
        }
      }

      // Zoom to the overall view initially

      zoomEntirety(viewerLeft);
      setTimeout(function () { transferCameras(true); }, 0);

      cleanedModel = true;
    }
  }
  else if (cleanedModel && e.percent > 10) {

    // If we have already cleaned and are even further loaded,
    // remove the progress listeners from the two viewers and
    // watch the cameras for updates

    unwatchProgress();
    watchCameras();
  }
}

// Add and remove the per-viewer event handlers


function watchCameras() {
  viewerLeft.addEventListener('cameraChanged', left2right);
  viewerRight.addEventListener('cameraChanged', right2left);
}

function unwatchCameras() {
  viewerLeft.removeEventListener('cameraChanged', left2right);
  viewerRight.removeEventListener('cameraChanged', right2left);
}

function unwatchProgress() {
  viewerLeft.removeEventListener('progress', progressListener);
  viewerRight.removeEventListener('progress', progressListener);
}

// Event handlers for the cameraChanged events


function left2right() {
  if (!updatingRight) {
    updatingLeft = true;
    transferCameras(true);
    setTimeout(function () { updatingLeft = false; }, 500);
  }
}

function right2left() {
  if (!updatingLeft) {
    updatingRight = true;
    transferCameras(false);
    setTimeout(function () { updatingRight = false; }, 500);
  }
}

function transferCameras(leftToRight) {

  // The direction argument dictates the source and target

  var source = leftToRight ? viewerLeft : viewerRight;
  var target = leftToRight ? viewerRight : viewerLeft;

  var pos = source.navigation.getPosition();
  var trg = source.navigation.getTarget();

  // Set the up vector manually for both cameras

  var upVector = new THREE.Vector3(0, 0, 1);
  source.navigation.setWorldUpVector(upVector, true);
  target.navigation.setWorldUpVector(upVector, true);

  // Get the up direction for the target camera

  var up = source.navigation.getCameraUpVector();

  // Get the position of the target camera

  var newPos = offsetCameraPos(source, pos, trg, leftToRight);

  // Save the left-hand camera position: device tilt orbits
  // will be relative to this point

  leftPos = leftToRight ? pos : newPos;

  // Zoom to the new camera position in the target

  zoom(
    target, newPos.x, newPos.y, newPos.z, trg.x, trg.y, trg.z,
    up.x, up.y, up.z
  );
}


function offsetCameraPos(source, pos, trg, leftToRight) {

  // Get the distance from the camera to the target

  var xd = pos.x - trg.x;
  var yd = pos.y - trg.y;
  var zd = pos.z - trg.z;
  var dist = Math.sqrt(xd * xd + yd * yd + zd * zd);

  // Use a small fraction of this distance for the camera offset

  var disp = dist * 0.04;

  // Clone the camera and return its X-translated position

  var clone = source.autocamCamera.clone();
  clone.translateX(leftToRight ? disp : -disp);
  return clone.position;
}

// Model-specific helper to zoom into a specific part of the model

function zoomEntirety(viewer) {
  zoom(viewer, -48722.5, -54872, 44704.8, 10467.3, 1751.8, 1462.8);
}


// Set the camera based on a position, target and optional up vector


function zoom(viewer, px, py, pz, tx, ty, tz, ux, uy, uz) {

  // Make sure our up vector is correct for this model

  var upVector = new THREE.Vector3(0, 0, 1);
  viewer.navigation.setWorldUpVector(upVector, true);

  var up =
    (ux && uy && uz) ? new THREE.Vector3(ux, uy, uz) : upVector;

  viewer.navigation.setView(
    new THREE.Vector3(px, py, pz),
    new THREE.Vector3(tx, ty, tz)
  );
  viewer.navigation.setCameraUpVector(up);
}

To host something similar yourself, I recommend starting with the post I linked to earlier and building it up from there (you basically need to provide the ‘/api/token’ server API – using your own client credentials – for this to work).

But you don’t need to build it yourself – or even have an Android device – to give this a try. Simply load the HTML page in your preferred WebGL-capable browser (Chrome is probably safest, considering that’s what I’ve been using when developing this) and have a play.

On a PC it will respond to mouse or touch navigation, of course, but in tomorrow’s post we’ll implement a much more interesting – at least with respect to Google Cardboard, where you can’t get your fingers near the screen to navigate – tilt-based navigation mechanism. We’ll also take a look at how we can use Google Chrome Canary to emulate device-tilt on a PC, reducing the need to jump through the various hoops needed to debug remotely. Interesting stuff. :-)

October 13, 2014

Gearing up for the VR Hackathon

I’m heading back across to the Bay Area on Wednesday for 10 days. There seems to be a pattern forming to my trips across: I’ll spend the first few days in San Francisco – in this case attending internal strategy meetings in our 1 Market office – and then head up after the weekend to San Rafael to work with the members of the AutoCAD engineering team based up there. I’ll still probably head back into SF for the odd day, the following week, but that’s fine: I really like commuting by ferry from Larkspur to the Embarcadero.

The weekend I’m spending in the Bay Area is looking to have a slightly different shape this time, though. Rather than just catching up with old friends (which I still hope to do), I’ve signed up for the VR Hackathon, an event that looks really interesting. I was happy to find out about this one and that it fell exactly during my stay. I’ve even roped a few colleagues into coming along, too.

VR Hackathon

Looking at the “challenges” posted for the hackathon, it seemed worth taking a look at web and mobile VR, as these look like the two I’m most likely to be able to contribute towards. This led to me reaching out to Jim Quanci and Cyrille Fauvel, over in the ADN team, to see what’s been happening with respect to VR platforms such as Oculus Rift and Google Cardboard.

It turns out the ADN team has invested in a few Oculus Rift Developer Kits, but was looking for someone to spend some time fooling around with integrating the new WebGL-based Autodesk 360 viewer with Google Cardboard. And as “fooling around” is my middle name, I signed up enthusiastically. :-)

For those of you who haven’t been following the VR space, lately, I think it’s fair to say that Facebook put the cat amongst the pigeons when they acquired Oculus. Google’s competitive response was very interesting: at this year’s Google I/O they announced Google Cardboard, a simple View-Master-like mount for a smartphone that can be used for AR or VR.


A few notes about the design: there are two lenses that focus the smartphone’s display – which is split in half in landscape mode, with one half for each eye – and there’s a simple magnet-based button on the left as well as an embedded NFC tag to tell the phone when to launch the Cardboard software. The rear camera has also been left clear in case you need its input for a “reality feed” in the case of AR or perhaps some additional information to help with VR.

Aside from the smartphone, the whole package can be made for a few dollars (assuming a certain economy of scale, of course) with the provided instructions. Right now you can pick them up pre-assembled for anywhere between $15 and $30 – still cheap for the capabilities provided. Which has led to the somewhat inevitable nickname of “Oculus Thrift”. :-)

The point Google is making, of course, is that you don’t need expensive, complex kit to do VR: today’s smartphones have a lot of the capabilities needed, in terms of processing power, sensors and responsive, high-resolution displays.

When looking into the possibilities for supporting Cardboard from a software perspective, there seem to be two main options: the first is to create a native Android app using their SDK, the second is to create a web-app such as those available on the Chrome Experiments site.

Given the web-based nature of the Autodesk 360 viewer, it seemed to make sense to follow the latter path. Jim and Cyrille kindly pointed me at an existing integration of Cardboard with Three.js/WebGL, which turned out to be really useful. But we’ll look at some specifics more closely in the next post.

During the rest of the week – and I expect to post each day until Thursday, at least, so check back often – I’ll cover the following topics:

  1. Creating a stereoscopic viewer for Google Cardboard using the Autodesk 360 viewer
  2. Adding tilt support for model navigation and enabling fullscreen mode
  3. Supporting multiple models

If I manage to get my hands on the pre-release Leap Motion SDK for Android then I’ll try to integrate that, too, at some point. Mounting a Leap Motion controller to the back of the goggles allows you to use hand gestures for additional (valuable) input in a VR environment… I’m thinking this may end up being the “killer app” for Leap Motion (not mine specifically, but VR in general).

Until tomorrow!

October 09, 2014

New Memento build and webinar

As reported over on Scott’s blog, Project Memento v1.0.10.5 is now available on Autodesk Labs. I won’t repeat the specific new features in this release – Scott covers those thoroughly – but I will say that I’m personally most excited about trying the improved .OBJ and .FBX export and the workflows that they enable.

Project Memento

To find out more about Memento, there’s a webinar on Wednesday, October 15 at 9am Pacific talking about the tool. During the webinar, Tatjana Dzambazova – whom you may have seen in her excellent TEDx session – will cover topics ranging from uploading photos and working with highly detailed meshes to 3D printing the results.

And in somewhat related news, the ReCap website – recap.autodesk.com – has received a welcome refresh. Head on over and check it out!

October 08, 2014

Autodesk software is free for students, teachers and schools (yes, really)

I mentioned this initiative a few months ago, but it turns out it hadn’t been rolled out everywhere: there were regional exceptions, meaning that students in certain countries weren’t eligible for the program at that point. So my apologies if it sounds like I’m repeating myself, but at least it’s good news that I’m announcing twice. :-)

students.autodesk.com

The last kinks have been ironed out of the program, so students, teachers and schools anywhere in the world can now download and use the following Autodesk software for free:

Free software list

So if you’re a student who was expecting to be able to get free Autodesk software tools based on my previous post but it didn’t work out, check again on students.autodesk.com – this time it will!

October 06, 2014

Update on Spark, Autodesk’s 3D printing platform

There’s been a lot in the news about Spark – Autodesk’s entry into the 3D printing market – of late. Earlier in the year we announced this open platform and a reference design for it, but in the last few weeks things have become even more interesting: specific examples of partnerships with companies who are building their own printers based on Spark have started to emerge. I thought it worth aggregating a few of the more interesting articles for those who might have missed them.

I’m personally really interested in the approach Autodesk is taking here. It seems to me that the “additive manufacturing” space is currently dominated by vendors trying to monetize both the upfront hardware investment and the consumables, which are often proprietary (i.e. the razor and the blades). And they’re providing software that’s really an afterthought rather than being considered of prime importance to the customer.

Opening up the platform to people wanting to drive innovation in materials and/or software should have a positive impact on the industry. And presumably be a good thing for users connecting Autodesk design tools with Spark-powered devices, of course.

Autodesk's 1st 3D printer

Here’s an interesting interview where Autodesk’s CTO, Jeff Kowalski, provides some useful background information, including how the Spark platform and the coming Autodesk-branded 3D printers are analogous to Android and Google’s Nexus devices, respectively. And those who have managed to get their hands on the first Spark-based DLP printer are suitably impressed.

As an example of the type of innovation that could conceivably end up in the Spark software platform (I have no idea whether it’s part of the plan or not, mind), check out an Autodesk Research project announced at this week’s UIST (User Interface Software and Technology) Symposium:

PipeDream allows you to create internal pipes and tubes in your 3D-printed models as conduits for wires or for air leading to sensors or even actuators providing haptic feedback.

Local Motors' Strati 
Local Motors was the first to announce a partnership with Autodesk, incorporating Spark into the process for creating the Strati, the first ever 3D-printed car.

Dremel's 3D Idea Builder

A household name in handheld tool systems, Dremel then announced their own 3D printer based on Spark (this one based on FDM).

3DPrintshow's 2014 Brand of the Year

It’s clearly been an interesting few months since Autodesk announced this new focus on 3D printing back in May. In recognition of this – and I have to admit to finding this pretty incredible, personally – 3D Printshow named Autodesk as their 2014 Brand of the Year.

Spark blog

If you find this kind of news interesting, be sure to check this new blog dedicated to Spark on a regular basis – or simply follow the Spark Twitter account. Developments are coming thick and fast!

photo credit: automobileitalia via photopin cc
