I was hoping to get the Kinect for Windows 2.0 pre-release developer kit in time for AU – so I could talk about it intelligently, even if I don’t end up demoing it – which is why I was very happy when the UPS delivery man showed up yesterday.
As always, it started with a simple box:
Which, once opened, revealed some nice monochrome graphics:
And the device itself, of course (tripod is model’s own ;-).
The addition of the tripod screw-hole on the bottom is very welcome: I ended up buying a somewhat flimsy and unstable third-party tripod for my first-generation Kinect devices, so being able to use something sturdier is a breath of fresh air.
The pre-release graphics cover the underlying Xboxiness of the device (the first pre-release sensor is basically the Xbox One sensor with an external adaptor allowing it to be used via USB 3.0 – you can see the Xbox logo glowing through the sticker on the right when it’s active). External power is currently needed, supplied via a fairly serious power brick with various per-country plug sections provided in the box. The one I have is 240V, so I’m not even going to bother carrying the v2 kit all the way to Las Vegas next week.
The v2 device doesn’t have a tilt motor: the larger vertical field of view makes it redundant, which I’m sure reduces the risk of mechanical issues as well as allowing control code to be removed from the SDK.
Next up – after plugging in – was to run the KinectStatus tool:
If you’re running the standard samples – or your own custom code – you need to launch the KinectService tool, which brings up a status window in a command prompt and remains active until you cancel it.
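For custom code, getting hold of the sensor is pleasantly simple. Here’s a minimal sketch using the v2 .NET API – the exact names may well shift before the final release, so treat this as illustrative – and KinectService needs to be running for the sensor to report as available:

```csharp
using System;
using System.Threading;
using Microsoft.Kinect;

class SensorCheck
{
  static void Main()
  {
    // Get the default v2 sensor and open it – this talks to KinectService
    var sensor = KinectSensor.GetDefault();
    sensor.Open();

    // Availability can take a moment to be reported after Open()
    Thread.Sleep(1000);

    Console.WriteLine(
      sensor.IsAvailable ?
        "Sensor is available." :
        "Sensor not available - is KinectService running?");

    sensor.Close();
  }
}
```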
Here are the depth and skeleton (which in the v2 SDK is now called a “body”, so yes, the API now has a BodyCount property for the number of skeletons ;-) samples in action:
You can see a few additional joints in the hands and neck, in particular, and the hips are also more biologically accurate.
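To give an idea of how the body API hangs together, here’s a sketch – again using names from the v2 .NET API, which may differ slightly in the pre-release SDK – that polls for a body frame and reads one of the new hand joints:

```csharp
using System;
using Microsoft.Kinect;

class BodySketch
{
  static void Main()
  {
    var sensor = KinectSensor.GetDefault();
    sensor.Open();

    // BodyCount is the maximum number of bodies (skeletons) tracked at once
    var bodies = new Body[sensor.BodyFrameSource.BodyCount];

    using (var reader = sensor.BodyFrameSource.OpenReader())
    {
      // Poll until a frame arrives (frames come through at up to 30 fps)
      BodyFrame frame = null;
      while (frame == null)
      {
        frame = reader.AcquireLatestFrame();
      }

      using (frame)
      {
        frame.GetAndRefreshBodyData(bodies);
        foreach (var body in bodies)
        {
          if (!body.IsTracked) continue;

          // One of the additional v2 joints: the tip of the right hand
          var tip = body.Joints[JointType.HandTipRight].Position;
          Console.WriteLine(
            "Right hand tip at ({0:F2}, {1:F2}, {2:F2})",
            tip.X, tip.Y, tip.Z);
        }
      }
    }
    sensor.Close();
  }
}
```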
So then on to AutoCAD. There was a fair amount of work needed to get the standard point cloud import sample working well, but most of it felt really cathartic: a lot of old code could be ripped out – including the event handlers that waited for frames to come through, as it’s now possible to read the most recent frame(s) just when needed via a MultiSourceFrameReader. It’s also possible to specify exactly which frame types your application cares about via this object, which is perfect. The API changes so far seem coherent and clean.
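By way of illustration – and with the caveat that this is a sketch against the v2 .NET API as I currently understand it, not the sample’s actual code – here’s what the polling approach looks like. You ask for just the frame types you need and pull the latest matched set on demand:

```csharp
using Microsoft.Kinect;

class MultiSourceSketch
{
  static void Main()
  {
    var sensor = KinectSensor.GetDefault();
    sensor.Open();

    // Request only the frame types this application cares about
    using (var reader =
      sensor.OpenMultiSourceFrameReader(
        FrameSourceTypes.Depth | FrameSourceTypes.Color))
    {
      // No event handler needed: pull the latest matched frames on demand
      // (this will be null until the first frames come through – a real
      // app would poll inside its update loop)
      var multiFrame = reader.AcquireLatestFrame();
      if (multiFrame != null)
      {
        using (var depthFrame =
          multiFrame.DepthFrameReference.AcquireFrame())
        {
          if (depthFrame != null)
          {
            // ... process the most recent depth data here ...
          }
        }
      }
    }
    sensor.Close();
  }
}
```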
Here’s a shot of point cloud input brought into AutoCAD from Kinect for Windows v2:
So far, so good. The depth resolution is currently more or less comparable with that of the v1 device – the point cloud above is generated from a 512 x 424 depth frame – but the precision certainly seems better (and we’re working with 1080p RGB data, so we also have more pixels to choose from when coloring a single depth point). And this could very well change in the months prior to the final release, of course.
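For the curious, here’s roughly how points can be generated from a depth frame – a sketch of the general approach assuming the v2 CoordinateMapper API, rather than the sample’s actual code: each 512 x 424 depth frame maps to one 3D point per pixel.

```csharp
using Microsoft.Kinect;

class PointCloudSketch
{
  static void Main()
  {
    var sensor = KinectSensor.GetDefault();
    sensor.Open();

    var desc = sensor.DepthFrameSource.FrameDescription; // 512 x 424
    var depthData = new ushort[desc.Width * desc.Height];
    var points = new CameraSpacePoint[depthData.Length];

    using (var reader = sensor.DepthFrameSource.OpenReader())
    using (var frame = reader.AcquireLatestFrame())
    {
      if (frame != null)
      {
        frame.CopyFrameDataToArray(depthData);

        // One 3D point (in metres, camera space) per depth pixel;
        // MapDepthFrameToColorSpace could similarly pick the matching
        // 1080p color pixel for each depth sample
        sensor.CoordinateMapper.MapDepthFrameToCameraSpace(
          depthData, points);
      }
    }
    sensor.Close();
  }
}
```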
I’m particularly looking forward to Kinect Fusion being part of the SDK, but in the meantime I’ll continue updating the samples that don’t rely on capabilities from the Developer Toolkit.
That’s it from me until I get to Vegas: I squeezed my week’s posts into the first three days – as I know lots of readers will be offline for Thanksgiving – and I still have a few last-minute tweaks to make to the material for next week’s sessions. My next post is very likely to be from AU 2013!