
Rob Relyea: Kinect for Windows SDK Beta 2 Released!


Description

The Kinect for Windows SDK Beta 2 is now available for download! Rob Relyea joins us to give a refresher on what the Kinect SDK can do, and what's new for Beta 2. There are a number of under-the-hood improvements including faster and more accurate skeletal tracking, support for x64, and support for multi-core machines. They've also added a new StatusChanged event to know when a Kinect has been connected, disconnected, or doesn't have enough power, as well as new APIs to better manage using multiple Kinects. Watch for more details!
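To make the new pieces concrete, here is a minimal sketch of enumerating Kinects and listening for the new StatusChanged event from managed code. It assumes the Beta 2 Microsoft.Research.Kinect.Nui namespace with a Runtime.Kinects collection and a StatusChangedEventArgs.Status member as described in this episode; exact member names and enum values may differ, so treat it as an illustration rather than copy-paste code.

    // Sketch only: react to Kinects being connected/disconnected and start
    // skeletal tracking on the first sensor found (Beta 2 allows skeletal
    // tracking on only one Kinect per process).
    using System;
    using Microsoft.Research.Kinect.Nui;

    class KinectStatusDemo
    {
        static void Main()
        {
            // Fires when a Kinect is connected, disconnected, or lacks power.
            // Assumption: Runtime.Kinects.StatusChanged exists as described above.
            Runtime.Kinects.StatusChanged += (sender, e) =>
                Console.WriteLine("Kinect status changed: " + e.Status);

            // Pick the first available sensor.
            Runtime activeKinect = null;
            foreach (Runtime kinect in Runtime.Kinects)
            {
                activeKinect = kinect;
                break;
            }
            if (activeKinect == null)
            {
                Console.WriteLine("No Kinect detected.");
                return;
            }

            activeKinect.Initialize(RuntimeOptions.UseSkeletalTracking |
                                    RuntimeOptions.UseDepthAndPlayerIndex);
            activeKinect.SkeletonFrameReady += (sender, e) =>
                Console.WriteLine("Skeleton frame received.");

            Console.ReadLine();          // run until Enter is pressed
            activeKinect.Uninitialize();
        }
    }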

 


The Discussion

  • Alex_Toledo

    Great job on allocations and multi-core support!

  • Richard.Hein

    Great improvements. :) The possibilities are endless.

  • Chris

    Very nice, but for the Kinect website, wouldn't it make sense to have some Kinect-enabled feature(s) on display for visitors to interact with?

  • aL3891

    great stuff!

  • Rob Relyea

    @Alex_Toledo, Richard.Hein, aL3891 - thanks.

    @Chris - I'll pass that idea along to our website folks. It wouldn't be a slam dunk right now, because getting Kinect working in the browser isn't fully plumbed yet.

    All-
    Ch9 posted a good summary post linking to all the new stuff (new site, new blog, new twitter, new release, etc...) from yesterday: https://channel9.msdn.com/coding4fun/kinect/Happy-Birthday-Kinect

     

  • Burkholder

    Does the SDK provide functions/classes that automatically align/register the RGB camera images and the depth camera images?

    In other words, if I'm indexing a pixel on someone's face in the RGB image, can I easily get the associated depth measurement?  Or do I have to do all that alignment/registration between the RGB and depth images myself?

    Similarly, if I'm in the depth image at some (row,column,depth) coordinate and I want to get the RGB value that should be at that coordinate, does the SDK return the correct RGB value?  Or does it just give me whatever the RGB value is (however incorrect) at the (row,column) coordinate such that I would still have to do all that offset/interpolation coding between the RGB and depth images myself to come up with the correct RGB value?

  • Ahmad

    Is there any way to initialize skeletal tracking on two Kinects at the same time?

  • Mo Relax

    Please, does anyone know how I can access the Shape Game demo included in the SDK?

  • Bas

    Excellent stuff, can't wait to try it. The allocation and performance stuff is much appreciated.

    One thing I would've liked to see here is clearer naming conventions for the coordinate space conversion methods. DepthImageToSkeleton, SkeletonToDepthImage, and GetColorPixelCoordinatesFromDepthPixel all deal with converting from one coordinate space to another, yet each uses a different naming style. I have to stop and think, or look them up, every time I use them simply because the naming is so confusing. I'd much prefer it if they were called GetDepthCoordinatesFromSkeletonSpace, GetSkeletonCoordinatesFromDepthSpace, and GetColorCoordinatesFromDepthSpace, for instance. Not only would the method names then follow the same naming scheme, it would also be a lot clearer what they actually do, as opposed to "DepthImageToSkeleton", which doesn't even have a verb. Something to consider, hopefully.

  • Rob Relyea

    @Burkholder - look at the set of related functions that Bas mentions. I'll have to check if we have a Color -> Depth mapping. Most scenarios I've seen that deal with both color and depth start with Depth, and get the appropriate Color for each "pixel" by calling GetColorPixelCoordinatesFromDepthPixel (sketched at the end of this comment).

    @Bas - yes, the naming and locations of all those "sensor space mapping" methods are a bit suspect. We haven't yet been able to go back and make sense of them, but we have many suggestions from people like you for doing so. We understand the need.

    @Ahmad - Not with our Beta 2 build. You may not, in the same process, use two Kinects with skeletal tracking turned on for both. I believe that having two apps on the same machine would allow each app to use a different Kinect and get skeletal tracking going.

    @Mo Relax - take a look at the readme, the documentation, and the Program Files entry for the SDK and you'll find the location. %KINECTSDK_DIR%\Samples\Bin has a compiled version of that app. In Beta 2, %KINECTSDK_DIR%\Samples\ also has a .zip file with the source code for all the C#/C++ samples (unzip it to My Documents or a similar location for best results). We are considering building a sample browser app (like the Xbox SDK and DirectX SDK have) to make it easy to run the different samples, install the source code, or jump to the docs for each sample.

    sorry for the delay...it has been a crazy week!
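    To make that depth-first pattern concrete, here is a rough sketch of walking a depth frame and asking the runtime where each depth pixel lands in the color image. GetColorPixelCoordinatesFromDepthPixel is the method named above; the surrounding DepthToColorMapping class, the MapFrame helper, and the exact parameter order are illustrative assumptions based on the Beta 2 samples rather than a verbatim copy.

        // Sketch only: map every 320x240 depth pixel to its 640x480 color
        // coordinates using the Beta 2 managed API.
        using Microsoft.Research.Kinect.Nui;

        static class DepthToColorMapping
        {
            const int DepthWidth = 320, DepthHeight = 240;

            // depthValues holds one 16-bit value per depth pixel as read from
            // the depth stream; whether the 3 player-index bits need masking
            // before the call depends on the stream format - check the docs.
            public static void MapFrame(Runtime kinect, short[] depthValues)
            {
                for (int y = 0; y < DepthHeight; y++)
                {
                    for (int x = 0; x < DepthWidth; x++)
                    {
                        short depth = depthValues[y * DepthWidth + x];

                        int colorX, colorY;
                        kinect.NuiCamera.GetColorPixelCoordinatesFromDepthPixel(
                            ImageResolution.Resolution640x480,  // color resolution
                            new ImageViewArea(),                // default view (no zoom/pan)
                            x, y, depth,
                            out colorX, out colorY);

                        // (colorX, colorY) is where this depth sample falls in
                        // the color image; clamp before indexing, since mapped
                        // coordinates can land slightly outside the frame.
                    }
                }
            }
        }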

  • melvin

    Hi Rob, is it possible to do skeletal tracking at the same time on two different Kinects using the Microsoft SDK?
