Following on from the introduction to this series – and to the Kinect for Windows v2 sensor – it’s time to take a closer look at some of the AutoCAD integration samples.
At the core of the Kinect sensor’s capabilities are really two things: the ability to capture depth data and to detect people’s bodies in the field of view. There are additional bells and whistles such as audio support, Kinect Fusion and face tracking, but the foundation is really about RGB-D input and the additional runtime analysis required to track humans.
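For those who like to see what this means in code: here’s a minimal sketch – my own simplification, not lifted from the actual samples, so names such as KinectConnector are mine – of how a C# application using the v2 SDK opens the sensor and subscribes to depth, colour and body frames in one go.

```csharp
// A minimal sketch of connecting to the KfW v2 data streams.
// KinectConnector is a hypothetical name, not from the samples.

using Microsoft.Kinect;

public class KinectConnector
{
  private KinectSensor _sensor;
  private MultiSourceFrameReader _reader;

  public void Start()
  {
    // There's a single sensor per machine with the v2 SDK
    _sensor = KinectSensor.GetDefault();
    _sensor.Open();

    // One reader can deliver several frame types together
    _reader = _sensor.OpenMultiSourceFrameReader(
      FrameSourceTypes.Depth | FrameSourceTypes.Color | FrameSourceTypes.Body);
    _reader.MultiSourceFrameArrived += OnFrameArrived;
  }

  private void OnFrameArrived(
    object sender, MultiSourceFrameArrivedEventArgs e)
  {
    var frame = e.FrameReference.AcquireFrame();
    if (frame == null)
      return; // frames can be dropped – this is normal

    // Pull the individual depth/colour/body frames from here...
  }
}
```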
Let’s take a look at both of these capabilities in action. I’ve captured the animated GIFs below – keeping them as lightweight as possible, so the site still loads in a reasonable time – to demonstrate them both.
Capturing point clouds in AutoCAD can be achieved with the original KINECT command (along with variants such as KINBOUNDS and KINSNAPS, which enable clipping and timelapse capture, respectively).
Here’s an example capture with a lot of the frames edited out: the point here is to show the approximate quality of the capture, rather than the smoothness of the processing.
This is all from a single frame of RGB-D data – you can clearly capture much more with Kinect Fusion (which we’ll look at next time).
The above point cloud has 183,810 points (the theoretical maximum being 512 x 424 = 217,088 points, given the resolution of the depth camera). This is a fair bit higher than with KfW v1, which had a theoretical depth resolution of 320 x 240 (i.e. a maximum of 76,800 points), although it seemed to give you more when you mapped 640 x 480 RGB data to depth: you were presumably getting multiple adjacent – and potentially differently coloured – pixels with the same depth values, as the generated point clouds could indeed contain up to about 300K points. The quality of the depth data is certainly better with KfW v2, and the processing is snappier, too.
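In case it helps to see how a depth frame turns into those points, here’s a hedged sketch – the helper is hypothetical, not the samples’ actual code – using the v2 CoordinateMapper. It also shows why a capture falls short of the 217,088 maximum: pixels without a valid depth reading map to infinite coordinates and get filtered out.

```csharp
// A sketch of converting one depth frame into AutoCAD points.
// DepthFrameToPoints is a hypothetical helper name.

using Microsoft.Kinect;
using Autodesk.AutoCAD.Geometry;

public static Point3dCollection DepthFrameToPoints(
  KinectSensor sensor, ushort[] depthData)
{
  // Map every depth pixel to a 3D point in camera space (metres)
  var cameraPoints = new CameraSpacePoint[depthData.Length];
  sensor.CoordinateMapper.MapDepthFrameToCameraSpace(
    depthData, cameraPoints);

  var points = new Point3dCollection();
  foreach (var p in cameraPoints)
  {
    // Pixels with no valid depth reading come back as -infinity
    if (float.IsInfinity(p.X) || float.IsNaN(p.X))
      continue;
    points.Add(new Point3d(p.X, p.Y, p.Z));
  }
  return points; // typically fewer points than 512 x 424
}
```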
Next up, let’s take a look at skeleton tracking, which can be shown inside AutoCAD using the KINSKEL command:
(There’s also a command that displays point cloud data overlaid with the skeleton data – the KINBOTH command – but I’m not showing it in this post. You can imagine how it works. :-)
Skeleton tracking is much smoother and cleaner than before – much less jerky, with fewer flailing limbs – even if we’re still a ways off industrial-strength motion capture. The joint count has increased to 25 from the previous 20: there’s an additional joint in the neck and two more in each of the hands, one at the fingertips and one at the thumb. Which means you get hand tracking – you can see whether the hand is open or closed – although not finger tracking, even if the sensor resolution is probably now sufficient for that, too.
Aside from the increase in the number of joints, their placement is now more biologically accurate, too. The hips and neck are much closer to where they are in real life with this version of the device.
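For the curious, here’s roughly what consuming this data looks like with the v2 SDK – a hedged sketch rather than the samples’ actual event handler – reading the 25 joints and the per-hand open/closed state from a body frame.

```csharp
// A sketch of a BodyFrameReader event handler (handler name is mine)

using System.Linq;
using Microsoft.Kinect;

void OnBodyFrameArrived(object sender, BodyFrameArrivedEventArgs e)
{
  using (var frame = e.FrameReference.AcquireFrame())
  {
    if (frame == null)
      return;

    var bodies = new Body[frame.BodyCount];
    frame.GetAndRefreshBodyData(bodies);

    foreach (var body in bodies.Where(b => b != null && b.IsTracked))
    {
      // Each of the 25 joints carries a 3D position in metres
      CameraSpacePoint head = body.Joints[JointType.Head].Position;

      // Hand tracking: Open, Closed or Lasso (two fingers extended)
      HandState left = body.HandLeftState;
      HandState right = body.HandRightState;
    }
  }
}
```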
This more precise skeletal tracking can clearly enable tighter application integrations. For instance, as we saw in this previous post, the KINEXT2 command – which sweeps pipes along the path your hand takes through the air – can now use the distance between your thumb and your fingertips to specify the diameter of the circular extrusion profile. Which is much more precise than using the distance between your two hands. (In fact the KINNEAR system variable – which previously enabled the now-redundant “near mode” with KfW v1 – is now only used to choose between these two approaches for specifying the profile radius.)
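To make that measurement concrete, here’s a small sketch – the helper name is mine, not from the samples – of taking the thumb-to-fingertip distance from a tracked body, using the two hand joints that are new in v2.

```csharp
// A sketch of measuring between the new v2 hand joints.
// HandTipToThumbDistance is a hypothetical helper name.

using System;
using Microsoft.Kinect;

static double HandTipToThumbDistance(Body body)
{
  CameraSpacePoint tip = body.Joints[JointType.HandTipRight].Position;
  CameraSpacePoint thumb = body.Joints[JointType.ThumbRight].Position;

  // Straight-line distance in metres between the two joints
  double dx = tip.X - thumb.X;
  double dy = tip.Y - thumb.Y;
  double dz = tip.Z - thumb.Z;
  return Math.Sqrt(dx * dx + dy * dy + dz * dz);
}
```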
Other commands for drawing geometry are also still there in the application, of course, such as KINPOLY – for drawing 3D polylines in space – and KINEXT – which works in a similar way to KINEXT2, but this time with a pre-specified profile radius.
That’s it for this run-through of the basic commands integrating depth and skeleton data into AutoCAD. In the next post we’ll move on to two more advanced capabilities: face tracking and Kinect Fusion.