AutoCAD'ing with the Kinect for Windows v2
Kean Walmsley, long-time Friend of the Gallery, AutoCAD .NET integration guy (if you use AutoCAD and .NET, you've got to read his blog) and, in his spare time, Kinect enthusiast, recently released a four-part series on the Kinect for Windows v2 device, SDK and AutoCAD.
Friend of the Gallery? Man, just look at all the times we've highlighted his work. :)
- Kinect for Windows v2 SDK Updated (Now with Fusion!)
- Fusing the Kinect, AutoCAD and Kinect Fusion (with some C#)
- AutoCAD integration samples updated for Kinect SDK v1.6
- Kinect to AutoCAD v1.5 and some AutoCAD Face Tracking too
- AutoCAD and the Kinect for v1
- AutoCAD and the Kinect
I've grabbed a few snips from each post in his recent series. Please make sure you click through and read them all...
The latest KfW device is a big step up from the original Kinect for Xbox 360 and Kinect for Windows v1 devices: for starters you get about 3 times the depth data and high-definition colour. This round of the Kinect technology is based on a custom CMOS sensor that uses time-of-flight rather than structured light to perceive depth – Microsoft moved away from using PrimeSense as a technology provider some time ago, well before their acquisition by Apple.
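The time-of-flight approach mentioned above boils down to measuring how long emitted light takes to bounce off the scene and return. As a rough illustration (this is the principle, not Kinect SDK code), distance is just the round-trip time multiplied by the speed of light, halved:

```python
# Illustrative sketch of the time-of-flight principle behind the v2
# sensor's depth measurement (NOT Kinect SDK code): light travels to
# the object and back, so distance = (speed of light * round trip) / 2.

SPEED_OF_LIGHT_M_PER_S = 299_792_458

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """Distance in metres for a measured round-trip time."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_seconds / 2

# A target 2 m away yields a round trip of roughly 13.34 nanoseconds.
print(round(distance_from_round_trip(13.34e-9), 3))  # → 2.0
```

The tiny timescales involved are why this needs a custom CMOS sensor rather than off-the-shelf parts (in practice the sensor measures phase shift of modulated light rather than timing individual pulses).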
KfW v2 has a better range than KfW v1 – it has no need for a tilt motor or near mode – and it’s much less sensitive to daylight (I haven’t yet tried it outside, but I will!). This is really an impressive piece of tech… in many ways you’re effectively getting a laser scanner at the ridiculously low price of $200.
There are definitely some things to be aware of, however. The latest SDK is now Windows 8/8.1 only, which will no doubt exclude a number of people wanting to use this on Windows 7. (I run Windows 8.1 on the machine I use for Kinect work – as I need a native OS install with GPU usage for Kinect Fusion, even if the rest of the SDK can function from inside a VM, such as my day-to-day system running Windows 7 via Parallels Desktop on OS X – so I’m thankfully not impacted by that particular decision.) The device also requires USB 3 – naturally enough, given the data throughput needed – and requires additional, external power in much the same way as KfW v1 did.
One other important “platform” consideration when using these samples… I’m tending to use them on AutoCAD 2014 rather than 2015. They do work on 2015, but as with this release we’ve completed the shift across from PCG to RCS/RCP for our native point cloud format it’s not currently possible to index text files into native point cloud files (as we can do in AutoCAD 2014 using POINTCLOUDINDEX). Which is a bit of a gap for developers wanting to generate and programmatically import point cloud data into AutoCAD: there’s currently a manual step needed, where the user indexes an .xyz file into .rcs using ReCap Studio before attaching it inside AutoCAD 2015. (This isn’t the end of the story, hopefully: I’m working with the ReCap team to see what’s possible, moving forwards. If you have a specific need for custom point cloud import into AutoCAD that you’d like to see addressed, please do let me know.)
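The .xyz files mentioned above are just plain text, one point per line. As a minimal sketch (the "X Y Z R G B" column layout shown here is one common variant; the exact columns your indexing tool expects may differ), generating one for POINTCLOUDINDEX or ReCap Studio to pick up could look like this:

```python
# A minimal sketch of writing point data to a plain-text .xyz file of
# the kind POINTCLOUDINDEX (AutoCAD 2014) or ReCap Studio can index.
# Assumes a simple "X Y Z R G B" layout, one point per line; check the
# column order your pipeline actually expects.

def write_xyz(path, points):
    """points: iterable of (x, y, z, r, g, b) tuples."""
    with open(path, "w") as f:
        for x, y, z, r, g, b in points:
            f.write(f"{x:.4f} {y:.4f} {z:.4f} {r} {g} {b}\n")

# One orange-ish point, 2 m in front of the sensor.
write_xyz("capture.xyz", [(0.0, 0.0, 2.0, 255, 128, 0)])
```

From there it's the manual ReCap Studio indexing step Kean describes before the .rcs can be attached in AutoCAD 2015.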
It’s time to take a closer look at some of the AutoCAD integration samples.
At the core of the Kinect sensor’s capabilities are really two things: the ability to capture depth data and to detect people’s bodies in the field of view. There are additional bells and whistles such as audio support, Kinect Fusion and face tracking, but the foundation is really about RGB-D input and the additional runtime analysis required to track humans.
Let’s take a look at both of these. I’ve captured the animated GIFs below – keeping them as lightweight as possible, so the site still loads in a reasonable time – to demonstrate these two foundational capabilities.
Capturing point clouds in AutoCAD can be achieved with the original KINECT command (along with variants such as KINBOUNDS and KINSNAPS, which enable clipping and timelapse capture, respectively).
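Under the hood, turning a depth frame into a point cloud means back-projecting each depth pixel into 3D. The real samples use the Kinect SDK's coordinate mapping for this; purely as an illustration, a hand-rolled pinhole-camera version (with invented placeholder intrinsics, not the sensor's actual calibration) looks like:

```python
# Illustrative back-projection from a depth pixel to a 3D camera-space
# point using a simple pinhole model. The actual samples rely on the
# Kinect SDK's coordinate mapper; the intrinsics below (fx, fy, cx, cy)
# are invented placeholder values, NOT the sensor's real calibration.

def depth_pixel_to_point(u, v, depth_m, fx=365.0, fy=365.0, cx=256.0, cy=212.0):
    """Map pixel (u, v) with depth in metres to an (x, y, z) point."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# The pixel at the principal point maps straight down the optical axis.
print(depth_pixel_to_point(256, 212, 2.0))  # → (0.0, 0.0, 2.0)
```

Doing this for every valid pixel of every frame is where the "3 times the depth data" of the v2 sensor starts to add up, which is why Kean keeps the capture commands (and the GIFs) lightweight.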
Here’s an example capture with a lot of the frames edited out: the point here is to show the approximate quality of the capture, rather than the smoothness of the processing.
Today’s post looks at face tracking and – to some degree, at least – Kinect Fusion, two more advanced Kinect SDK features that go some way above and beyond the standard samples we saw in the last post. In Kinect for Windows v1, these features belong to an additional “developer toolkit”, although they appear to have been fully integrated into the core Kinect SDK for v2. At least that’s the case in the preview SDK.
There are some additional runtime components you’ll need to copy across into AutoCAD’s program files folder to make use of these features: you’ll need Kinect20.Face.dll for face tracking and Kinect20.Fusion.dll for Kinect Fusion – both can be found with x86 and x64 versions in the Redist folder of the Kinect SDK. For face tracking there’s an additional NuiDatabase folder that will also need to be copied across that contains around 40 MB of (presumably) model data to help with face detection and tracking processes.
These are in addition to the Microsoft.Kinect.Face.dll and Microsoft.Kinect.Fusion.dll .NET assemblies the samples project references, of course.
A quick word on how I create and maintain these samples: ...
It’s time for (in my opinion) the most interesting piece of functionality provided by the Kinect SDK: Kinect Fusion.
Kinect Fusion is a straightforward way to capture 3D volumes – allowing you to move the Kinect sensor around to capture objects from different angles – and the KINFUS command in these integration samples lets you bring the captured data into AutoCAD. Which basically turns Kinect into a low-cost – and reasonably effective – 3D scanner for AutoCAD.
The KINFUS command (I’ve now done away with having a monochrome KINFUS command and a colour KINFUSCOL command… KINFUS now just creates captures with colour) hosts the Kinect Fusion runtime component and provides visual feedback on the volume as you map it, finally giving you options for bringing the data into your active drawing. This works much as I’ve described in the past for KfW v1, although this version has a few differences.
Firstly, as mentioned recently, the KfW v2 implementation of Kinect Fusion is much more stable: with v1 it wasn’t really viable to run Kinect Fusion effectively within a 3D design app – the processing lag when marshaling 3D data made it close to unusable, at least in my experience – while with v2 things are much better. It’s quite possible that much of this improvement stems from the use of a “camera pose” database, which makes tracking between frames much more reliable....