Through the Interface: Using SLAM-based Augmented Reality to visualize 3D geometry




February 11, 2013

Using SLAM-based Augmented Reality to visualize 3D geometry

I first became aware of the work being done by 13th Lab a couple of years ago, but just last week someone pinged me about it again and re-triggered my interest (thanks, Jim :-).

13th Lab is a small Swedish company that has created some really interesting Augmented Reality technology. Many AR systems make use of fiducial markers (which often look like sections of QR codes) to make it easier to determine where the 3D content should be positioned and visualized in the 2D image of the scene being fed from your device’s camera.

Ideally, though, you want a markerless AR system. 13th Lab have created just that, making use of a technique borrowed from the world of robotics called simultaneous localization and mapping (SLAM).

I’m far from being an expert in AR, but I thought I’d have a play around with some technology that 13th Lab have recently released into Beta. They’ve created two products, both of which are “FREE (even for commercial use)”: an SDK (currently for iOS and Unity 3D – apparently Android support is on its way) – which I decided not to spend time on – and a browser that runs on an iOS device and loads custom web-pages.

The PointCloud Browser can be installed from the App Store and pointed at one of the samples that show you how to use the JavaScript APIs to do something simple such as add basic 3D primitives, or go much further and implement an AR-based game. Using one of these samples as a base, I created my own web-page that pulls down 3D fractal data from the web-service I implemented as part of last year’s Cloud & Mobile series (if you try to load this page in a standard browser you’ll get uninteresting results, by the way).
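The page’s JavaScript (shown later in the post) expects each record returned by the web-service to carry position, radius and recursion-level fields. Here’s a minimal sketch of that shape and of the filtering the page applies – the sample values below are made up purely for illustration:

```javascript
// Each item from the web-service carries X/Y/Z (position), R
// (radius) and L (recursion level) fields – the field names match
// the listing below; the sample data here is invented.
var items = [
  { X: 0.8, Y: 0.5, Z: 0.3, R: 0.05, L: 3 },
  { X: 0.1, Y: 0.1, Z: 0.1, R: 0.05, L: 1 }
];

// Keep only the spheres near the surface of the outer unit
// sphere, as the page itself does
var visible = items.filter(function (item) {
  var length = Math.sqrt(
    item.X * item.X + item.Y * item.Y + item.Z * item.Z);
  return length + item.R > 0.99;
});

console.log(visible.length);  // 1 – only the first item survives
```
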

Once loaded, you can point your camera at a flat surface with some helpful detail (SLAM systems are presumably greatly helped by the existence of unique visual detail). I chose the whiteboard in our kitchen:

Choosing a flat surface

When you tap the screen, the browser then uses computer vision techniques to capture information about the surface:

Capturing the surface

At which point you should see your 3D geometry for the first time:

View of our data from one angle

As you move the device around, you see different views of the geometry, of course. Here’s a close-up:

A closer view

And here’s a view from the other side of the whiteboard:

And from another angle

The results aren’t yet quite as I want them, even if they prove the concept: the spheres currently all have the same radius, for instance, as that gets set at the mesh level (and mesh objects are ideally shared by multiple nodes). Some work will be needed either to find a way to set this per instance or to establish an efficient way of generating mesh objects for unique size/colour combinations.
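One way to attack the per-instance radius problem would be to generate (and cache) one mesh definition per unique size/colour combination, so shared meshes are only created when actually needed. Here’s a sketch of just the caching logic – the makeMeshXml-style registration step and the id scheme are my own invention, not part of the PointCloud API:

```javascript
// Sketch: cache one mesh id per unique (radius, colour) pair.
// The id scheme is hypothetical; registering the mesh with the
// engine would still need to be wired up.
var meshCache = {};

function meshIdFor(radius, color) {
  var key = radius + "|" + color;
  if (!meshCache[key]) {
    meshCache[key] = "mesh_" + Object.keys(meshCache).length;
    // In a real page we would also register the new mesh with
    // the engine here, e.g. by adding a mesh entry to the library
  }
  return meshCache[key];
}

// Two spheres with the same radius and colour share one mesh id
var a = meshIdFor(0.03, "0.5,0,0,1");
var b = meshIdFor(0.03, "0.5,0,0,1");
var c = meshIdFor(0.05, "0.5,0,0,1");
console.log(a === b, a === c);  // true false
```
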

Here’s the HTML and JavaScript code that integrates this data from the web-service:

<!DOCTYPE html>
<html>
  <head>
    <meta
      name="viewport"
      content="user-scalable=no, width=device-width, initial-scale=1.0, maximum-scale=1.0"/>

    <meta name="viper-init-options" content="manual"/>

    <link
      rel="viper-app-icon" type="images/png"
      href="..."/>

    <link
      rel="stylesheet" href="./css/common.css" type="text/css"/>

    <script type="text/javascript" src="./js/common.js"></script>

    <script type="text/xml" id="library">
      <library>
        <mesh
          id="sphere_mesh1" radius="0.03" primitive="sphere"
          color="0,0,0,1" details="3" />
        <mesh
          id="sphere_mesh2" radius="0.03" primitive="sphere"
          color="0.5,0,0,1" details="3" />
        <mesh
          id="sphere_mesh3" radius="0.03" primitive="sphere"
          color="0.5,0.5,0,1" details="3" />
        <mesh
          id="sphere_mesh4" radius="0.03" primitive="sphere"
          color="0,0.5,0,1" details="3" />
        <mesh
          id="sphere_mesh5" radius="0.03" primitive="sphere"
          color="0,0.5,0.5,1" details="3" />
        <mesh
          id="sphere_mesh6" radius="0.03" primitive="sphere"
          color="0,0,0.5,1" details="3" />
        <mesh
          id="sphere_mesh7" radius="0.03" primitive="sphere"
          color="0.5,0,0.5,1" details="3" />
        <mesh
          id="sphere_mesh8" radius="0.03" primitive="sphere"
          color="0.9,0.9,0.9,1" details="3" />
        <mesh
          id="sphere_mesh9" radius="0.03" primitive="sphere"
          color="0.6,0.6,0.6,1" details="3" />
        <mesh
          id="sphere_mesh10" radius="0.03" primitive="sphere"
          color="0.3,0.3,0.3,1" details="3" />
        <mesh
          id="sphere_mesh11" radius="0.03" primitive="sphere"
          color="1,1,1,1" details="3" />
        <mesh
          id="sphere_mesh12" radius="0.03" primitive="sphere"
          color="1,1,1,1" details="3" />

        <node id="masterSphere1" static="true">
          <model id="sphere_model" mesh="sphere_mesh1"/>
        </node>
        <node id="masterSphere2" static="true">
          <model id="sphere_model" mesh="sphere_mesh2"/>
        </node>
        <node id="masterSphere3" static="true">
          <model id="sphere_model" mesh="sphere_mesh3"/>
        </node>
        <node id="masterSphere4" static="true">
          <model id="sphere_model" mesh="sphere_mesh4"/>
        </node>
        <node id="masterSphere5" static="true">
          <model id="sphere_model" mesh="sphere_mesh5"/>
        </node>
        <node id="masterSphere6" static="true">
          <model id="sphere_model" mesh="sphere_mesh6"/>
        </node>
        <node id="masterSphere7" static="true">
          <model id="sphere_model" mesh="sphere_mesh7"/>
        </node>
        <node id="masterSphere8" static="true">
          <model id="sphere_model" mesh="sphere_mesh8"/>
        </node>
        <node id="masterSphere9" static="true">
          <model id="sphere_model" mesh="sphere_mesh9"/>
        </node>
        <node id="masterSphere10" static="true">
          <model id="sphere_model" mesh="sphere_mesh10"/>
        </node>
        <node id="masterSphere11" static="true">
          <model id="sphere_model" mesh="sphere_mesh11"/>
        </node>
        <node id="masterSphere12" static="true">
          <model id="sphere_model" mesh="sphere_mesh12"/>
        </node>
      </library>
    </script>

    <script type="text/xml" id="scene">
      <scene base="relative-baseplane">
        <light id="main_light"
          ambient="0.2, 0.2, 0.2, 0.2"
          diffuse="1.0, 1.0, 1.0, 1.0"
          specular="1.0, 1.2, 1.2, 1.0"
          position="3, 0.5, 2, 0"/>
      </scene>
    </script>

    <script type="text/javascript">

      function startup() {
        // Initialize the Viper engine here (the body of this
        // function was lost from the captured listing)
      }

      /**
       * This function is called when the web app is fully loaded
       * (e.g. sounds, textures, image descriptors)
       * and we're completely ready to go
       */
      function onAppLoaded() {

        var nodeListener = new viper.NodeListener();

        // Log camera position updates (the property this callback
        // gets assigned to was lost; the name below is assumed)
        nodeListener.onUpdate =
          function(node, data) {
            var pos = data.position.getTranslation();
            viper.log(
              "Received camera pos update: " +
              pos.getX() + "," + pos.getY() + "," + pos.getZ());
          };
      }

      /**
       * This function is called when the Viper JavaScript API is
       * ready for use
       */
      function onViperReady() {

        var scene = viper.getScene();
        populateWithLevel(scene, 5);

        // Create an observer. This observer contains the callback
        // functions that may be called from the engine layer.
        // We only need to add the functions that we are interested
        // in.

        var observer = {

          /**
           * Called when the user clicked cancel in the map creation
           * view
           */
          onMapCreationCancelled: function () {
          }
        };

        // Attach the observer to viper (method name assumed)
        viper.addObserver(observer);
      }

      function populateWithLevel(scene, level) {

        viper.log("Populate with level: " + level);

        // Make sure CORS is enabled
        jQuery.support.cors = true;

        // Call our web-service with the appropriate level
        $.ajax({
          url:
            '' +
            level,
          crossDomain: true,
          data: {},
          dataType: "json",
          error: function (err) {
            viper.log("Error calling web service.");
          },
          success: function (data) {

            viper.log("Successfully called web service.");

            // Process each sphere, adding it to the scene
            $.each(
              data,
              function (i, item) {

                // Get shortcuts to our JSON data
                viper.log("Processing item " + i);

                var x = item.X, y = item.Y, z = item.Z,
                    rad = item.R, level = item.L;

                var length = Math.sqrt(x * x + y * y + z * z);

                // Only add spheres near the edge of the outer one
                if (length + rad > 0.99) {

                  // Create a spherical node
                  var nodeID = "sphere_" + i;
                  var position =
                    new viper.math.Vector(x/3, y/3, z/3);
                  var sphere = new viper.Node(nodeID, position);
                  sphere.setPrototype("masterSphere" + level);

                  // Add the node to the scene (method name assumed)
                  scene.add(sphere);
                }
              }
            );
          }
        });
      }
    </script>
  </head>
  <body onload="startup()"> <!-- startup hook-up assumed -->
  </body>
</html>

If you want to give this a try yourself, install the PointCloud Browser and enter “” in the address bar: this will re-direct to the longer URL on my blog.

Once I’ve found a way to adjust the radii per instance, I think I’m going to investigate exporting simple 3D geometry from AutoCAD, to see what’s possible (the browser does support .OBJ for more complex objects – I’ll be looking at fairly simple stuff).
