Through the Interface

April 29, 2015

Puzzling over laser cutters

After introducing a rudimentary program to help build jigsaw puzzles, we needed to fabricate the design in some way. As mentioned originally, we quickly discarded the idea of 3D printing it as too time-consuming and unreliable, and decided to at least investigate using a laser cutter, which I considered a more natural choice for this.

Now I did have some experience of using the laser cutter at our local Fab Lab, but as we wanted to iterate quickly – and didn’t need its larger cutting bed – we opted to use the slightly smaller laser cutter we bought for the Neuchâtel office last summer. This one uses software called LaserCut 5.3 to drive the cutting or engraving process.

I hadn’t previously used this device, myself, but Nathan Moore, a colleague in our Enterprise Support team, kindly helped with his expertise. And we both ended up learning quite a bit in the process, which I thought was worth sharing in this post.

Nate at work

For our first attempt, we decided to cut cardboard. I wasn’t sure what exactly we’d learn from this – other than getting a sense of the size of the puzzle – but we ended up learning something very valuable… cutting order matters.

We were passing the various cutting curves through in an arbitrary order, so the cuts were presumably made in the order the curves exist in the drawing. This meant that some pieces – and sets of pieces – were being “closed” too soon. And given the nature of the cutting bed – which is basically open, consisting of a number of vertical metal sheets with gaps between them – fully cut pieces would simply fall through:

Cardboard cut-outs

So we needed to influence the cutting order, in some way.

The cutting software supports layers assigned in the DXF files it imports (although these actually need to be assigned unique colours – so strictly speaking it creates its own categorization based on colours) and you can assign them different properties, such as beam power and head speed. But the order of these layers also indicates the order of cutting, which ends up being a very useful attribute.

So we modified the drawing to have one layer for the radial lines, which we would cut first (or at least after any text engraving we wanted to perform), and then differently-coloured layers for the various concentric curves, so they would cut from the centre outwards. This worked really well, although we still suffered a bit when having to reassemble the unmarked pieces into a complete puzzle, at the end.

Our colourful puzzle indicating cutting order
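
For what it’s worth, here’s a rough sketch of how that layer setup might be automated via the .NET API rather than done by hand inside AutoCAD – the CUT_n layer names and the colour indices are purely illustrative, not what we actually used:

using Autodesk.AutoCAD.Colors;
using Autodesk.AutoCAD.DatabaseServices;

namespace JigsawGenerator
{
  public static class CutOrderHelpers
  {
    // Make sure a layer exists with the requested colour, returning its Id

    public static ObjectId EnsureLayer(
      Transaction tr, Database db, string name, short colourIndex
    )
    {
      var lt = (LayerTable)tr.GetObject(db.LayerTableId, OpenMode.ForRead);
      if (lt.Has(name))
        return lt[name];

      var ltr =
        new LayerTableRecord
        {
          Name = name,
          Color = Color.FromColorIndex(ColorMethod.ByAci, colourIndex)
        };

      lt.UpgradeOpen();
      var id = lt.Add(ltr);
      tr.AddNewlyCreatedDBObject(ltr, true);
      return id;
    }

    // Put an entity on the layer to be cut at position "order": e.g.
    // CUT_1 for the radial lines, CUT_2 to CUT_5 for the concentric
    // rings, working outwards from the centre

    public static void AssignCutLayer(
      Transaction tr, Database db, Entity ent, int order
    )
    {
      var layerId = EnsureLayer(tr, db, "CUT_" + order, (short)(order + 1));

      if (!ent.IsWriteEnabled)
        ent.UpgradeOpen();
      ent.LayerId = layerId;
    }
  }
}

The important point is the colour-per-layer categorization: LaserCut derives its own categories – and hence the cutting order – from those colours.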

We also played a fair amount with power and speed settings, finding out that speed matters greatly. For 3mm MDF – which we ended up using for the “final” version – we kept the power at 100%, whether engraving text or cutting the board, and varied the speed. We found that anything at or below 10 (higher being quicker) would cut the board cleanly. A speed of 10 was better than 5, as it resulted in a narrower cut (and a nice, snug fit for the jigsaw pieces). We used a much quicker head speed – 50, I think it was – for any text engraving.

Speaking of text, we had to explode MTEXT into normal TEXT before sending it across (via DXF) into the laser cutting software. The text alignment was definitely inconsistent between the import results and what we saw inside AutoCAD, so some tweaking based on the way it looked in the software (and then got engraved) ended up being needed.

One of our test runs
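
In case it’s useful, here’s a minimal sketch of how that MTEXT-to-TEXT explosion might be automated via the .NET API – the EXPMT command name is invented, and the stock EXPLODE command does the same job interactively:

using System.Collections.Generic;
using Autodesk.AutoCAD.ApplicationServices;
using Autodesk.AutoCAD.DatabaseServices;
using Autodesk.AutoCAD.Runtime;

namespace JigsawGenerator
{
  public class TextCommands
  {
    [CommandMethod("EXPMT")]
    public void ExplodeMText()
    {
      var doc = Application.DocumentManager.MdiActiveDocument;
      if (null == doc)
        return;
      var db = doc.Database;

      using (var tr = doc.TransactionManager.StartTransaction())
      {
        var ms =
          (BlockTableRecord)tr.GetObject(
            SymbolUtilityServices.GetBlockModelSpaceId(db),
            OpenMode.ForWrite
          );

        // Collect the MTEXT ids first, as we'll be adding to the
        // modelspace as we go

        var mtextIds = new List<ObjectId>();
        foreach (ObjectId id in ms)
        {
          if (tr.GetObject(id, OpenMode.ForRead) is MText)
            mtextIds.Add(id);
        }

        foreach (var id in mtextIds)
        {
          var mt = (MText)tr.GetObject(id, OpenMode.ForWrite);

          // Exploding simple MTEXT yields one or more TEXT (DBText)
          // entities

          var frags = new DBObjectCollection();
          mt.Explode(frags);

          foreach (DBObject obj in frags)
          {
            var ent = obj as Entity;
            if (ent == null)
              continue;
            ms.AppendEntity(ent);
            tr.AddNewlyCreatedDBObject(ent, true);
          }

          // Remove the original MTEXT

          mt.Erase();
        }

        tr.Commit();
      }
    }
  }
}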

Our next challenge is that we need to work out how to make each of the six segments of the puzzle a different colour. Choices include getting differently coloured MDF boards, painting or applying coloured laminate before the cutting operation, or just staining/painting the output after the fact. But that’s ultimately a detail… the results we have, so far, have been surprisingly good: we know they’ll fulfill the project requirements.

I also think this coding project deserves a bit more time, over the coming months… I may even end up working the puzzle generation into an AutoCAD I/O-powered website and service for one of my AU classes. :-)

April 27, 2015

Autodesk Answer Days kick off with AutoCAD on May 7th

Autodesk Answer Days

Autodesk is running a series of “Answer Days”, where you can get answers to your product questions directly – and hopefully in real-time – from Autodesk’s development and support teams. The first is for AutoCAD and will run from 6am to 6pm Pacific.

Answerers

It’s not just about the product, either. The ADN team will also be participating in this event, handling any questions you have around AutoCAD’s APIs.

To get answers during the event, it’s as simple as logging into the AutoCAD Community and creating a post on the AutoCAD Answer Day board on May 7th.

If you have questions about the event – which I suppose could be considered meta-questions ;-) – be sure to reach out to @AutodeskHelp on Twitter.

April 23, 2015

Creating jigsaw puzzles inside AutoCAD using .NET

Too. Much. Fun. As mentioned in the last post, a colleague came to me with a problem… for an internal team-building exercise, he needed to manufacture a circular, 60-piece jigsaw puzzle with 6 groups of 10 pieces, each of which should be roughly the same size. The pieces will also have some text engraved on them, but that’s a minor detail.

I searched the darkest corners of the Internet to find an online tool to generate a pattern for this, but then realised I’d spend my time more effectively by writing one myself and sharing it here. So that’s what we’re going to see today. The eventual goal is to laser cut the puzzle, of course, but first things first.

The first step was to work out the overall distribution of pieces. I fairly quickly worked out that 4 concentric rows of pieces, with 6 more pieces in each successive row (i.e. rows with 6, 12, 18 & 24 pieces), would add up to 60. Then it was a matter of determining the radii of the various concentric rings to make the area the same for each piece – a fairly simple bit of geometry. This left me with the basic grid of lines/arcs.

Our basic pattern
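
To give an idea of the radius calculation – this is back-of-the-envelope arithmetic rather than code from the actual app – each piece needs an area of one sixtieth of the full circle, so the area enclosed by each ring is simply the cumulative piece count up to that ring multiplied by the piece area:

using System;

class RingRadii
{
  static void Main()
  {
    const double R = 100.0;                  // outer radius of the puzzle
    int[] piecesPerRow = { 6, 12, 18, 24 };  // 60 pieces in total

    double pieceArea = Math.PI * R * R / 60.0;
    double cumulativePieces = 0.0;

    for (int i = 0; i < piecesPerRow.Length; i++)
    {
      cumulativePieces += piecesPerRow[i];

      // Area enclosed by ring i: PI * r^2 = cumulativePieces * pieceArea

      double r = Math.Sqrt(cumulativePieces * pieceArea / Math.PI);
      Console.WriteLine("Ring {0}: radius = {1:F2}", i + 1, r);
    }

    // For R = 100 this prints roughly 31.62, 54.77, 77.46 and 100.00
  }
}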

I created the outline with concentric circles (of course) and then polar arrays of lines. These I then exploded, resulting in 4 circles and the rest of the linear entities as short (non-contiguous) segments.

To generate a jigsaw pattern for the above outline, I decided on a couple of commands:

  1. JIGL is a command that takes the selected line segments and creates a spline at the location of each. This is for all the short, straight-line segments.
  2. JIG does the same thing, but works on a single curve (including circles – important for us, here) and allows us to select intersecting entities that define the limits of the curve section to process. This is for the concentric circles, and we select the appropriate radial lines as delimiters.

So how do we create a spline including a tab? It’s actually really easy. We take the end-points of the line (or the intersection of the circle and the selected entities) and then move inwards, adding 4 more fit points – 2 on the line, 2 at a distance from it – as you can see below:


A single segment

I used a random Boolean to decide which direction it gets created in (in the above case that means up or down) and an additional random factor that creates the tab at a slightly different position. I could also have varied the shape of the tab, for that matter… I think I’ll add that in v2 (as well as a command to add that random factor to existing tabs).

Here’s how these commands can be combined to create the puzzle. The first step shows the initial splines being created by JIGL, while the subsequent frames show the JIG command being used to generate the remaining segments. I also added arcs to fill in gaps, as needed.


Creating the jigsaw

Here’s the C# source code:

using Autodesk.AutoCAD.ApplicationServices;

using Autodesk.AutoCAD.DatabaseServices;

using Autodesk.AutoCAD.EditorInput;

using Autodesk.AutoCAD.Geometry;

using Autodesk.AutoCAD.Runtime;

using System;

 

namespace JigsawGenerator

{

  public class Commands

  {

    [CommandMethod("JIG")]

    public void JigEntity()

    {

      var doc = Application.DocumentManager.MdiActiveDocument;

      if (null == doc)

        return;

      var db = doc.Database;

      var ed = doc.Editor;

 

      // Select our entity to create a tab for

 

      var peo = new PromptEntityOptions("\nSelect entity to jig");

      peo.SetRejectMessage("\nEntity must be a curve.");

      peo.AddAllowedClass(typeof(Curve), false);

 

      var per = ed.GetEntity(peo);

      if (per.Status != PromptStatus.OK)

        return;

 

      // We'll ask the user to select intersecting/delimiting

      // entities: if they choose none we use the whole length

 

      ed.WriteMessage(

        "\nSelect intersecting entities. " +

        "Hit enter to use whole entity."

      );

 

      var pso = new PromptSelectionOptions();

      var psr = ed.GetSelection();

      if (

        psr.Status != PromptStatus.OK &&

        psr.Status != PromptStatus.Error // No selection

      )

        return;

 

      using (var tr = doc.TransactionManager.StartTransaction())

      {

        // Open our main curve

 

        var cur =

          tr.GetObject(per.ObjectId, OpenMode.ForRead) as Curve;

 

        double start = 0, end = 0;

        bool bounded = false;

 

        if (cur != null)

        {

          // We'll collect the intersections, if we have

          // delimiting entities selected

 

          var pts = new Point3dCollection();

 

          if (psr.Value != null)

          {

            // Loop through and collect the intersections

 

            foreach (var id in psr.Value.GetObjectIds())

            {

              var ent = (Entity)tr.GetObject(id, OpenMode.ForRead);

 

              cur.IntersectWith(

                ent,

                Intersect.OnBothOperands,

                pts,

                IntPtr.Zero,

                IntPtr.Zero

              );

            }

          }

 

          ed.WriteMessage(

            "\nFound {0} intersection points.", pts.Count

          );

 

          // If we have no intersections, use the start and end

          // points

 

          if (pts.Count == 0)

          {

            start = cur.StartParam;

            end = cur.EndParam;

            pts.Add(cur.StartPoint);

            pts.Add(cur.EndPoint);

            bounded = true;

          }

          else if (pts.Count == 2)

          {

            start = cur.GetParameterAtPoint(pts[0]);

            end = cur.GetParameterAtPoint(pts[1]);

            bounded = true;

          }

 

          // If we have a bounded length, create our tab in a random

          // direction

 

          if (bounded)

          {

            var rnd = new Random();

            var left = rnd.NextDouble() >= 0.5;

 

            CreateTab(db, tr, cur, start, end, pts, left);

          }

        }

 

        tr.Commit();

      }

    }

 

    [CommandMethod("JIGL")]

    public void JigLines()

    {

      var doc = Application.DocumentManager.MdiActiveDocument;

      if (null == doc)

        return;

      var db = doc.Database;

      var ed = doc.Editor;

 

      // Here we're going to get a selection set, but only care

      // about lines

 

      var pso = new PromptSelectionOptions();

      var psr = ed.GetSelection();

      if (psr.Status != PromptStatus.OK)

        return;

 

      using (var tr = doc.TransactionManager.StartTransaction())

      {

        // We'll be generating random numbers to decide direction

        // for each tab

 

        var rnd = new Random();

 

        foreach (var id in psr.Value.GetObjectIds())

        {

          // We only care about lines

 

          var ln = tr.GetObject(id, OpenMode.ForRead) as Line;

          if (ln != null)

          {

            // Get the start and end points in a collection

 

            var pts =

              new Point3dCollection(

                new Point3d[] {

                  ln.StartPoint,

                  ln.EndPoint

                }

              );

 

            // Decide the direction (randomly) then create the tab

 

            var left = rnd.NextDouble() >= 0.5;

            CreateTab(

              db, tr, ln, ln.StartParam, ln.EndParam, pts, left

            );

          }

        }

        tr.Commit();

      }

    }

 

    private static void CreateTab(

      Database db, Transaction tr,

      Curve cur, double start, double end, Point3dCollection pts,

      bool left = true

    )

    {

      // Again we're going to generate random numbers

 

      var rnd = new Random();

 

      // We're calculating a random delta to adjust the location

      // of the tab along the length

 

      double delta = 0.1 * (rnd.NextDouble() - 0.5);

 

      // Calculate the length of this curve (or section)

 

      var len =

        Math.Abs(

          cur.GetDistanceAtParameter(end) -

          cur.GetDistanceAtParameter(start)

        );

 

      // We're going to offset to the side of the core curve for

      // the tab points. This is currently a fixed tab size

      // (could also make this proportional to the curve)

 

      double off = 0.5;

      double fac = 0.5 * (len - 0.5 * off) / len;

      if (left) off = -off;

 

      // Get the next parameter along the length of the curve

      // and add the point associated with it into our fit points

 

      var nxtParam = start + (end - start) * (fac + delta);

      var nxt = cur.GetPointAtParameter(nxtParam);

      pts.Insert(1, nxt);

 

      // Get the direction vector of the curve

 

      var vec = pts[1] - pts[0];

 

      // Rotate it by 90 degrees in the direction we chose,

      // then normalise it and use it to calculate the location

      // of the next point

 

      vec = vec.RotateBy(Math.PI * 0.5, Vector3d.ZAxis);

      vec = off * vec / vec.Length;

      pts.Insert(2, nxt + vec);

 

      // Now we calculate the mirror points to complete the

      // spline's definition

 

      nxtParam = end - (end - start) * (fac - delta);

      nxt = cur.GetPointAtParameter(nxtParam);

      pts.Insert(3, nxt + vec);

      pts.Insert(4, nxt);

 

      // Finally we create our spline and add it to the modelspace

 

      var sp = new Spline(pts, 1, 0);

 

      var btr =

        (BlockTableRecord)tr.GetObject(

          SymbolUtilityServices.GetBlockModelSpaceId(db),

          OpenMode.ForWrite

        );

      btr.AppendEntity(sp);

      tr.AddNewlyCreatedDBObject(sp, true);

    }

  }

}

The JIG command is only really needed if you want non-rectilinear patterns. Here’s how the JIGL command deals with a straight rectangular grid (created using RECTANG, ARRAY, EXPLODE and OVERKILL, as we don’t want overlapping lines).


Creating a rectangular jigsaw
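
If you’d rather generate that kind of grid programmatically, a sketch along these lines would produce the individual, non-overlapping segments that JIGL expects – the GRID4JIG command name and the grid dimensions are invented for illustration:

using Autodesk.AutoCAD.ApplicationServices;
using Autodesk.AutoCAD.DatabaseServices;
using Autodesk.AutoCAD.Geometry;
using Autodesk.AutoCAD.Runtime;

namespace JigsawGenerator
{
  public class GridCommands
  {
    [CommandMethod("GRID4JIG")]
    public void CreateGrid()
    {
      var doc = Application.DocumentManager.MdiActiveDocument;
      if (null == doc)
        return;
      var db = doc.Database;

      const int cols = 8, rows = 6;  // number of cells
      const double cell = 10.0;      // cell size

      using (var tr = doc.TransactionManager.StartTransaction())
      {
        var btr =
          (BlockTableRecord)tr.GetObject(
            SymbolUtilityServices.GetBlockModelSpaceId(db),
            OpenMode.ForWrite
          );

        // Horizontal edges: one short segment per cell, so nothing
        // overlaps (no need for OVERKILL)

        for (int j = 0; j <= rows; j++)
          for (int i = 0; i < cols; i++)
            AddLine(
              tr, btr,
              new Point3d(i * cell, j * cell, 0),
              new Point3d((i + 1) * cell, j * cell, 0)
            );

        // Vertical edges

        for (int i = 0; i <= cols; i++)
          for (int j = 0; j < rows; j++)
            AddLine(
              tr, btr,
              new Point3d(i * cell, j * cell, 0),
              new Point3d(i * cell, (j + 1) * cell, 0)
            );

        tr.Commit();
      }
    }

    private static void AddLine(
      Transaction tr, BlockTableRecord btr, Point3d from, Point3d to
    )
    {
      var ln = new Line(from, to);
      btr.AppendEntity(ln);
      tr.AddNewlyCreatedDBObject(ln, true);
    }
  }
}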

April 22, 2015

Extracting help IDs for AutoCAD’s UI Finder

In the last post we looked at some simple JavaScript code to automate AutoCAD’s UI Finder, locating a sequence of commands in the ribbon. In this post we’re going to look at how to generate a more extensive list directly from AutoCAD’s documentation.

The first step I took was to download and install the offline help for AutoCAD 2016. This gives us a set of HTML and JavaScript files in a local folder (C:\Program Files\Autodesk\AutoCAD 2016 Help\English\Help on my system). To parse the files I ended up using some old-school UNIX commands via my OS X environment (which shares the above folder via Parallels). I’m sure you can get these tools for Windows, too, but it seemed simplest (for me) to do it this way.

Here’s what I ended up writing to get the help IDs from the HTML source. I used grep to identify the lines of interest – although, as each of these files is ultimately a single line, all this really does is identify and concatenate the files containing the search string – and then piped the data into sed, allowing us to extract the help identifiers from the input.

To make it simpler to copy/paste the data into a JavaScript file, I surrounded each ID by double quotes and suffixed a comma.

grep -r '<span class=\\"uifinderbtn\\" data-id=\\"' * | \

sed 's/.*<span class=\\"uifinderbtn\\" data-id=\\"\([0-9A-Z_a-z ]*\)\\">Find<\/span>.*/"\1",/g' | \

sort | uniq

From there I used sort & uniq to get rid of any duplicates from the list.

Looking for help topics was very similar – it’s only the attribute name that changes slightly:

grep -r '<span class=\\"uifinderbtn\\" data-helptopic=\\"' * | \

sed 's/.*<span class=\\"uifinderbtn\\" data-helptopic=\\"\([0-9A-Z_a-z ]*\)\\">Find<\/span>.*/"\1",/g' | \

sort | uniq

I won’t post the list of IDs these two command sequences generated… suffice it to say that at 2 seconds per command I was waiting for several minutes watching the UI Finder do its thing. Which was cool in a very geeky way.
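
As mentioned above, equivalent tools do exist for Windows; for anyone without a UNIX environment to hand, here’s a rough .NET sketch of the same extraction – the class name is invented, and the escaped quotes in the regular expression match the \" sequences the grep patterns above are looking for:

using System;
using System.Collections.Generic;
using System.IO;
using System.Text.RegularExpressions;

class HelpIdExtractor
{
  static void Main()
  {
    var folder =
      @"C:\Program Files\Autodesk\AutoCAD 2016 Help\English\Help";

    // Use "data-id" for command IDs or "data-helptopic" for help topics

    ExtractIds(folder, "data-id");
  }

  static void ExtractIds(string folder, string attribute)
  {
    // The markup lives inside JavaScript strings, so the attribute
    // values are wrapped in escaped quotes: data-id=\"ID_Box\"

    var rx = new Regex(attribute + @"=\\""([0-9A-Za-z_ ]*)\\""");

    // A SortedSet gives us the sort | uniq behaviour for free

    var ids = new SortedSet<string>();

    foreach (var file in
      Directory.EnumerateFiles(folder, "*", SearchOption.AllDirectories))
    {
      foreach (Match m in rx.Matches(File.ReadAllText(file)))
      {
        ids.Add(m.Groups[1].Value);
      }
    }

    // Surround each ID with double quotes and append a comma, ready to
    // paste into a JavaScript file

    foreach (var id in ids)
    {
      Console.WriteLine("\"{0}\",", id);
    }
  }
}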

To take this series further, I’m going to need to spend some time writing a palette-based UI. I won’t be doing that immediately, though… today I managed to get side-tracked by an interesting project someone brought to me in desperation: for an internal team-building activity they need a 60-piece circular jigsaw puzzle (they may end up 3D printing it or – and this was my suggestion – using a laser cutter to create it). So yes, this afternoon I spent some time writing a basic AutoCAD app to help generate jigsaw puzzles. Which we’ll look at next time. :-)

April 20, 2015

Invoking AutoCAD’s UI Finder from JavaScript

On my flight back from Singapore I started thinking about how an app might help people discover what’s new in the AutoCAD UI from release to release. This might also work for custom functionality, but that’s not (currently) my main concern. I was thinking of displaying some kind of palette that cycles through the new commands and features in a release, highlighting the associated ribbon buttons, etc., using the AutoCAD help system’s excellent UI Finder capability.

Over the weekend I started looking at how it might work – whether it was possible to call the UI Finder from an app (not just from the web-based help, itself). Here’s what I found out: given the right “help IDs”, we can call the HelpFindUI() method (via exec() from JavaScript) and have it highlight the UI capability we’re interested in.

Here’s a quick demo to illustrate. At this stage it’s just looping through a fixed set of commands – and with no UI to explain what’s happening – but it should clarify the intention, at least.


UI Finder automation

There are a couple of points to note: the name of the function we need to call has a new prefix in AutoCAD 2016, so we have to code around that. I also found that we need to launch HELP for the HelpFindUI() method to be available: presumably a module gets loaded or something else gets initialized when this happens. I looked into what, specifically, but ended up just launching HELP when our JavaScript gets loaded… we can tweak that later on, but it does the trick, for now.

Here’s the JavaScript code to make this happen:

// A few command IDs to loop through

 

var dispCmds =

  ["ID_Box",

   "ID_Scale",

   "ID_Extrude",

   "ID_Cone",

   "ID_3dalign",

   "ID_CONV2SOLID",

   "ID_Import",

   "ID_Offset",

   ""

  ];

 

// A few topic IDs to loop through

 

var dispTops =

  [

    "3DDWF",

    "3DPRINT",

    "VIEWPLOTDETAILS",

    ""

  ];

 

function findBox() {

  locateCommand(dispCmds[0], false); // 1st item in the array

}

 

function findCmds() {

  locateCommands(dispCmds.slice(0), false); // Copies the array

}

 

function findTops() {

  locateCommands(dispTops.slice(0), true); // Copies the array

}

 

function locateCommands(cmds, topic) {

 

  // Pop the first item off the array

 

  var cmd = cmds.shift();

 

  if (cmd != null) {

 

    // Find a single command/UI element

 

    locateCommand(cmd, topic);

 

    // Find the rest, after a 2 second delay

 

    setTimeout(function () { locateCommands(cmds, topic); }, 2000);

  }

}

 

function locateCommand(id, topic) {

 

  var fn = "HelpFindUI";

 

  // In 2016 onwards we need the help_Api prefix

 

  if (typeof (apiVersion) == 'function' && apiVersion() > 2)

    fn = "help_Api." + fn;

 

  // Package up the JSON required to call our function

 

  var json = {

    functionName: fn,

    invokeAsCommand: false,

    functionParams: { ID: id, IsTopic: topic }

  };

 

  // Call it (ignore the return value, unless debugging)

 

  var ret = exec(JSON.stringify(json));

}

 

Acad.Editor.addCommand(

  "HELP_CMDS",

  "FC",

  "FC",

  Acad.CommandFlag.MODAL,

  findBox

);

 

Acad.Editor.addCommand(

  "HELP_CMDS",

  "FCS",

  "FCS",

  Acad.CommandFlag.MODAL,

  findCmds

);

 

Acad.Editor.addCommand(

  "HELP_CMDS",

  "FTS",

  "FTS",

  Acad.CommandFlag.MODAL,

  findTops

);

 

// Launch HELP at the end of the command... this allows the

// UI finder to work properly

 

Acad.Editor.executeCommand("_.HELP");

If you save this to a .js file, you can then WEBLOAD it into AutoCAD 2015 or 2016 and give it a try.

The code implements a few commands: FC highlights a single command button (the one for BOX), while FCS loops through a sequence of such commands with a 2 second delay between each. FTS does the same but for help topics rather than individual commands, although it doesn’t seem to work as consistently as the command-locating version, for now.

To take this further, I’m going to look at locating the various help IDs from the documentation. I may even end up hitting the command-line for an old school grep & sed session, which might be fun. At some point I’ll also look at implementing some kind of web-based UI to drive this… probably cycling through a list of commands in a palette while highlighting the associated UI elements.

April 17, 2015

AU2015 Call for Proposals opens next week

AU

The countdown for Autodesk University 2015 starts with the Call for Proposals opening on April 22nd. It will remain open until May 26th. I have a few ideas for possible topics to present at this year’s event…

  • Virtual Reality using Autodesk’s View & Data API
  • Developing JavaScript applications for AutoCAD using TypeScript
  • Processing drawings in the cloud using AutoCAD I/O

They all seem to be valid topics, but we’ll see if I can find the energy and motivation to submit all three. If you have an opinion on what you’d like to hear me talk about – even if you don’t expect to attend the event… any topic I cover will inevitably lead to multiple, related blog posts :-) – then please do post a comment.

I’ve heard from a few of you who are considering submitting proposals for this year’s AU. Please do! It’s a great way to increase your visibility within the Autodesk community and to expand your network. And it’s just plain fun. :-)

Looking forward to seeing you later in the year in Las Vegas!

April 16, 2015

New versions of Autodesk’s Reality Computing products

A couple of updates from our Reality Solutions team…

ReCap 360 Ultimate

Firstly, a new, expanded portfolio of ReCap products is now available for download. It includes the newly branded ReCap 360 Ultimate – previously known as ReCap Pro. For more information, head on over to this blog post.

ReCap 360 Ultimate – Red Rocks

Secondly, Autodesk Memento has been updated to version 1.0.15.10. Here’s what’s new in this release:

  • Live update mechanism to deliver minor updates in the future
  • Extrude a boundary
  • Surface Sculpt
  • Smart selection to select planar & organic surfaces with strokes
  • Photo validator to check for photos that will not stitch
  • Ability to toggle between Navigation-only and Selection modes - Shortcut key 'space bar'
  • First person navigation - Shortcut key 'Tab' to cycle through the two navigation modes
  • X-ray mode (to see occluded surfaces/defects) – Shortcut key '1' to toggle and '-/+' to control the depth
  • Fixed most of the bugs reported on the forum

As I’m here in Singapore I’m heading for lunch tomorrow with Murali Pappoppula, whose team develops the Memento product. I’m looking forward to finding out what is in the pipeline for this product – it’s really coming along in leaps and bounds.

April 11, 2015

A dream come true

Today’s my birthday, which I’m spending with a fairly intense work schedule here in Singapore. My family gave me my present early on Saturday night, before I flew out on Sunday morning. To my absolute delight, it’s a Stormtrooper onesie, which I suspect will soon become my preferred WFH uniform.

Stormtrooper Kean

There’s a reason I’m looking so happy. When I was growing up there were children who wanted to be Luke Skywalker or Han Solo during our Star Wars re-enactments. I wanted to be a Stormtrooper. Which I have no doubt says something very important about my personality. When I finally get around to getting psychoanalysed, I’ll be sure to let you know what they say.

And so it turns out I’ve also found one possible Ultimate Question of Life, the Universe and Everything: the age at which you should strive to fulfill your remaining childhood dreams. Surely not a bad goal to aim for! :-)

April 10, 2015

First impressions of the FARO Freestyle3D scanner

Back when FARO announced their new Freestyle3D handheld scanner, I contacted them to see whether they might have one for me to take a look at. They very kindly obliged, and a few weeks ago I received a loaner model in the post.

Boxed Freestyle3D

I won’t be writing an exhaustive review – at least not in this post – but I did want to share my first impressions, mainly to capture them for future discussion. Bear in mind that most of what I’m writing here is personal opinion and the rest is pure speculation :-). Hopefully someone at FARO will be able to point out any factual inaccuracies so I can correct them.

Of course my primary interest in the scanner was to get it working in some way with AutoCAD, and ideally without a lot of the hurdles I jumped through when integrating Kinect Fusion (in many ways a comparable system). Before seeing whether that was possible, let’s take a look at some of the important points about the Freestyle3D scanner.

Much like Kinect v1, the Freestyle3D is a structured light scanner: it projects a pattern of infrared dots and detects their deformation. Like the first Kinect, it has a range of 50cm to 3m. That’s about where the similarities to Kinect end, though.

While Kinect Fusion requires a desktop-class PC to run the Kinect runtime – essentially reconstructing a watertight 3D mesh in real(ish) time – the FARO system takes a different approach. More on this in a little while. One of the reasons Kinect Fusion has such heavyweight requirements is that it’s performing energy minimisation calculations between consecutive point-cloud frames – albeit in a highly parallelised fashion via the GPU – to determine what additional data is contributing to the mesh.

The Freestyle3D can work with any PC, but comes bundled with a Surface Pro 3 with FARO’s SCENE Capture software pre-installed. This is a great way to perform captures in a (largely) untethered way: there’s a wrist strap for the Surface Pro, and you carry it around with the scanner in your other hand (the two are connected by a USB 3 cable).

Freestyle3D and Surface Pro 3

[In many ways the Surface Pro 3 is the device of choice for many Windows-centric software vendors to meet the needs of mobile customers: Siemens seemed to base their whole mobile pitch around it at the recent Develop3D Live event, for instance. It certainly has the horsepower to run moderately heavyweight desktop software without a significant amount of UI rework needed.]

So how is the Freestyle3D’s scanning approach different from Kinect Fusion? Rather than requiring a heavyweight graphics card to basically “diff” the point clouds for each frame coming from the scanner, SCENE Capture uses Visual SLAM to determine how the scanner is moving through space: it uses computer vision to extract features – edges, corners, etc. – from the camera input and then uses these data points to track the scanner’s position in 3D space. You’ll notice, for instance, that tracking is very dependent on light levels: if there’s insufficient clarity in the image coming from the camera, the software has trouble extracting enough features and therefore tracking the scanner’s location.

Scanning a Morgan

This means a few things. Firstly, it’s a lot snappier: while you have to move slowly – and the software warns you when you’re starting to go too quickly – tracking is a lot more reliable than I was used to with Kinect Fusion. Secondly, you’re not working with a voxelised 3D volume – a closed mesh – you’re building a point cloud. Which means you’re going to see more noise, especially when scanning reflective surfaces, the Achilles heel of the 3D scanning world.

When tracking does get lost, the visual feedback is actually fairly good…

Tracking is lost

… you’re given decent visual clues as to where you need to place the scanner for tracking to be restored:

Time to reposition the scanner

Capturing is therefore fairly painless. I’m by no means an expert user of SCENE, but I managed to work out most of what I needed. It apparently provides the capability to edit out erroneous frames from a scan – something I can see might be needed, as in a few of my longer scans I found that I had multiple planes for the floor or one of the walls. I’m sure this is down to user error, but it certainly highlights the fact you need a certain level of expertise to avoid this scenario (probably by creating and merging multiple, smaller scans).

The final point cloud

One thing that absolutely needs work is the workflow from SCENE to Autodesk software. When you install SCENE you can see that it includes an Autodesk component called DeCap (this is an Autodesk SDK that can be used to create RCS and RCP files… it’s basically “headless” ReCap ;-). Unfortunately the SCENE software doesn’t seem to use this directly, at the time of writing (v5.4). I found I had to export to another format – whether .E57 or .PTX, sometimes one worked better than the other – and then import that into ReCap Studio to generate a .RCS or .RCP file that can be imported into AutoCAD.

So quite a convoluted process to get the data across from the Freestyle3D into AutoCAD. I’m told that both FARO and Autodesk are working on improving this workflow, so I don’t expect the pain to continue forever. It’s still early days, of course.

Scanned Morgan

I’d also love to see this scanner feed Autodesk Memento, generating a mesh rather than a point cloud. This kind of integration is largely working with Artec’s scanners, today, but not yet with other devices such as the Freestyle3D.

Overall I found it very interesting working with the Freestyle3D. I’m very curious to see how this technology – and the supporting workflow – evolves over time. I’m sadly having to ship it back in the next few days, but I have stored a fairly varied set of captures that I intend to work on when I get the chance. For instance I fully intend to try extracting floorplans from an office space, as mentioned in the last post.

April 08, 2015

AutoCAD 2016: Extracting floorplans from point clouds using .NET

You may remember an HTML progress meter I created a little while ago while looking at “future API features”. The API feature in question was of course for AutoCAD 2016, and related to extracting floorplans programmatically using .NET – the topic we’re covering in today’s post.

We’re going to see some fairly basic code that asks AutoCAD to analyse a point cloud – which we’ll attach from an RCS or RCP file – and generate polyline boundaries for its floorplan.

Now I didn’t actually have a great point cloud to test this, so I ended up using one I’d captured when testing the Kinect v2 sensor:

Kinect point cloud

[On a related note, I’ve been working with a FARO Freestyle3D scanner for the last few weeks: once I publish more on using that I’ll hopefully revisit this code to see how it performs on a more real-world dataset.]

The .NET API has a more rudimentary set of extraction features than the C++ API in this release: in ObjectARX you have the AcPointCloudExtractedCylinder class, for instance, which presumably means you can also use the extraction feature to extract cylinders rather than just polylines.

Here’s some C# code implementing the EFP command (for ExtractFloorPlan) that makes use of the code in this previous post as well as the HTML progress meter file you can find here:

using Autodesk.AutoCAD.ApplicationServices;

using Autodesk.AutoCAD.Colors;

using Autodesk.AutoCAD.DatabaseServices;

using Autodesk.AutoCAD.Geometry;

using Autodesk.AutoCAD.Runtime;

using System;

 

namespace PointCloudAnalysis

{

  public class Commands

  {

    [CommandMethod("EFP")]

    public void ExtractFloorPlan()

    {

      var doc = Application.DocumentManager.MdiActiveDocument;

      var db = doc.Database;

      var ed = doc.Editor;

 

      using (var tr = db.TransactionManager.StartTransaction())

      {

        var msId = SymbolUtilityServices.GetBlockModelSpaceId(db);

 

        // First let's attach the point cloud from the RCS (or RCP)

 

        var pcId =

          PointCloudEx.AttachPointCloud(

            "c:\\temp\\pc.rcs",

            new Point3d (0,0,1),

            1,

            0,

            db

          );

 

        // And get it as a PointCloudEx object

 

        var pc =

          tr.GetObject(pcId, OpenMode.ForRead) as PointCloudEx;

 

        // Might also want to adjust MinimumSegmentLength and FillGap

 

        var eo = new ExtractOption();

        eo.ExtractType = ExtractionType.AllLine;

 

        // Attempt an extraction

 

        try

        {

          var res =

            PointCloudExtractor.Extract(

              pc,

              Vector3d.ZAxis,

              Vector3d.XAxis,

              Point3d.Origin,

              eo,

              new DisplayPointCloudExtractionProgress()

            );

 

          // If we have results...

 

          if (res != null)

          {

            // Add the various polyline profiles to the modelspace

            // (and make them red)

 

            var col = Color.FromColorIndex(ColorMethod.ByAci, 1);

            var ids =

              PointCloudExtractor.AppendPolylineProfile(

                res, msId, "0", col, 0.0

              );

          }

        }

        catch

        {

          pc.UpgradeOpen();

          pc.Erase();

        }

 

        tr.Commit();

      }

    }

  }

 

  public class DisplayPointCloudExtractionProgress

    : IPointCloudExtractionProgressCallback

  {

    private ProgressMeterHtml _pm;

    private int _ticks;

    private bool _started;

 

    public DisplayPointCloudExtractionProgress()

    {

      _pm = new ProgressMeterHtml();

      _pm.SetLimit(100);

 

      _ticks = 0;

      _started = false;

    }

 

    public void End()

    {

      _pm.Stop();

    }

 

    public void Cancel()

    {

      _pm.Cancel();

    }

 

    public bool Cancelled()

    {

      if (_pm.Cancelled)

      {

        _pm.AdditionalInfo(" ");

        _pm.Stop();

        return true;

      }

      return false;

    }

 

    public void UpdateRemainTime(double t)

    {

      if (t > 0)

      {

        _pm.AdditionalInfo(

          Math.Round(t, 2).ToString() + " seconds remaining."

        );

      }

    }

 

    public void UpdateCaption(string s)

    {

      if (_started)

      {

        _pm.Caption(s);

      }

      else

      {

        _pm.Start(s);

        _started = true;

      }

    }

 

    public void UpdateProgress(int i)

    {

      while (_ticks < i)

      {

        _pm.MeterProgress();

        System.Windows.Forms.Application.DoEvents();

        _ticks++;

      }

 

      System.Windows.Forms.Application.DoEvents();

 

      if (i == 100)

      {

        End();

      }

    }

  }

}

Here’s how the EFP command works when run against the above-mentioned point cloud.


ExtractFloorPlan

Now let’s take a closer look at the results. You can at least see we have some manner of boundary extracted: if we tweak the extraction options we can presumably increase the accuracy (something we’ll try to look at when we have a properly-captured point cloud of an office space to work with).


Point cloud analysis
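
For reference, tweaking the extraction options just means setting additional properties on the ExtractOption instance before calling Extract(). The fragment below would replace the two ExtractOption lines in the EFP command above – MinimumSegmentLength and FillGap are simply the two properties called out in the code’s comments, and the values are purely illustrative:

// Illustrative values only – check the 2016 .NET reference for the
// exact units expected

var eo = new ExtractOption();
eo.ExtractType = ExtractionType.AllLine;

// Ignore very short wall fragments caused by noise in the cloud

eo.MinimumSegmentLength = 0.25;

// Bridge small gaps so neighbouring segments join up into longer
// boundary polylines

eo.FillGap = 0.05;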
