Through the Interface: Updated AutoCAD + Kinect for Windows samples




March 21, 2012

Updated AutoCAD + Kinect for Windows samples

Thanks to some recent coverage on a Channel 9 blog (which I consider a great honour – I’ve been a huge fan of Channel 9 since its inception :-), I decided to get around to posting an update to the AutoCAD + Kinect samples I demonstrated at AU 2011.

While attending the recent hackathon, I spent a fair amount of time porting my AutoCAD-Kinect integration samples from the Beta 2 of the Microsoft Kinect SDK (for the Kinect for Xbox 360 sensor) to the released SDK for the Kinect for Windows.

It was pretty impressive just how many of the Kinect APIs were broken between the Beta 2 SDK and the final release (you can hear some additional commentary on this in this .NET Rocks episode). I understand the desire to get the API in shape prior to release – you really want to minimise any messy, legacy dependencies – but the full extent of the changes certainly took me by surprise. The good news is that there are a number of new APIs that make point cloud creation simpler, for instance, so I was able to pull out some of the more icky code from the samples.
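To give a flavour of what that simplification looks like: the released SDK lets you map depth pixels straight to world-space points via MapDepthToSkeletonPoint(), rather than doing the depth-to-world maths by hand as with the Beta 2 SDK. Here's a rough sketch of the pattern – the AutoCAD-side jig code is omitted, and the class and member names are illustrative rather than taken from the actual samples:

```csharp
// Sketch only: the v1 Kinect for Windows SDK pattern for turning a depth
// frame into world-space points. Assumes a sensor is plugged in.
using System.Collections.Generic;
using Microsoft.Kinect;

class PointCloudCapture
{
  private KinectSensor _sensor;
  private short[] _depthData;

  public void Start()
  {
    _sensor = KinectSensor.KinectSensors[0];
    _sensor.DepthStream.Enable(DepthImageFormat.Resolution320x240Fps30);
    _sensor.DepthFrameReady += OnDepthFrameReady;
    _sensor.Start();
  }

  void OnDepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
  {
    using (DepthImageFrame frame = e.OpenDepthImageFrame())
    {
      if (frame == null) return;
      if (_depthData == null)
        _depthData = new short[frame.PixelDataLength];
      frame.CopyPixelDataTo(_depthData);

      var points = new List<SkeletonPoint>();
      for (int y = 0; y < frame.Height; y++)
      {
        for (int x = 0; x < frame.Width; x++)
        {
          // MapDepthToSkeletonPoint replaces the manual depth-to-world
          // conversion needed with the Beta 2 SDK
          points.Add(
            _sensor.MapDepthToSkeletonPoint(
              DepthImageFormat.Resolution320x240Fps30, x, y,
              _depthData[y * frame.Width + x]));
        }
      }
      // ...hand the points over to the jig for display in AutoCAD...
    }
  }
}
```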

Anyway, here are the updated samples. If you’d like to diff the changes against the original version for the Beta 2 SDK, here they are, too, although be warned that I did take the opportunity to do some refactoring, particularly with respect to the speech recognition capability (I’ve now added some events to allow derived classes to add words to be recognised – previously it was all munged into the base KinectJig class).
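The shape of that refactoring is something like the following – a hypothetical sketch, not the actual sample code, with illustrative names throughout: the base jig raises an event while building its speech grammar, and derived classes (or external code) hook it to contribute their own words.

```csharp
// Hypothetical sketch of event-based word registration; names are
// illustrative, not taken from the actual KinectJig implementation.
using System;
using System.Collections.Generic;

class WordCollectionEventArgs : EventArgs
{
  public List<string> Words { get; private set; }
  public WordCollectionEventArgs(List<string> words) { Words = words; }
}

abstract class KinectJigBase
{
  // Raised while the grammar is being built; handlers append the
  // words they want recognised to the event args' list
  public event EventHandler<WordCollectionEventArgs> WordsRequested;

  protected List<string> CollectWords()
  {
    var words = new List<string> { "cancel" }; // common to all jigs
    var handler = WordsRequested;
    if (handler != null)
      handler(this, new WordCollectionEventArgs(words));
    return words;
  }
}
```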

Point cloud in AutoCAD via Kinect

Aside from the complexity of the migration (which was thankfully minimised by the fact I’d previously implemented a base class with much of the needed functionality), there are a few behaviours that still seem a little quirky: once started, the sensor now never seems to turn off, even after it has been stopped. You can unplug it, of course, but that doesn’t seem quite right. And having the red light stay on makes me feel like I’m being watched – it reminds me of the glowing red eyes in Terminator 2 (gulp).

Also, at first the speech recognition seemed more flaky, but that no longer appears to be the case: I’ve been working on a command to let you digitise a spline using voice commands such as “start”, “point” and “end”, but I didn’t feel comfortable working on it at the hackathon (there’s nothing more disturbing than someone waving their hands and shouting at their computer when you’re trying to solve a tricky coding problem :-). Now that I’ve had some time in a more private coding environment, things seem to be working well enough.

I did follow the updated Kinect audio samples that implement a 4-second delay for speech recognition to initialise properly, so perhaps that helped, too.
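For anyone hitting the same issue, the pattern in Microsoft’s Kinect audio samples looks roughly like this – the API calls are from the v1 SDK and Microsoft.Speech, but the surrounding structure is my own illustrative sketch, not the samples’ code:

```csharp
// Sketch of the initialisation delay used by the Kinect SDK audio
// samples before starting speech recognition.
using System.Threading;
using Microsoft.Kinect;
using Microsoft.Speech.AudioFormat;
using Microsoft.Speech.Recognition;

static class SpeechSetup
{
  public static SpeechRecognitionEngine StartRecognition(
    KinectSensor sensor, RecognizerInfo recognizer, Grammar grammar)
  {
    var engine = new SpeechRecognitionEngine(recognizer.Id);
    engine.LoadGrammar(grammar);

    var audioStream = sensor.AudioSource.Start();

    // The SDK samples wait four seconds for the Kinect audio device
    // to be ready after initialisation
    Thread.Sleep(4000);

    engine.SetInputToAudioStream(
      audioStream,
      new SpeechAudioFormatInfo(
        EncodingFormat.Pcm, 16000, 16, 1, 32000, 2, null));
    engine.RecognizeAsync(RecognizeMode.Multiple);
    return engine;
  }
}
```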

I’ll put the finishing touches on the spline creation sample – which I see as potentially handy when digitising real-world objects – and get that posted sometime in the coming weeks.
