Rob Relyea: Kinect for Windows SDK Beta 1 Refresh

Now that the Kinect for Windows SDK Beta 1 Refresh is live, we had a chance to sit down with Rob Relyea, a Principal Program Manager Lead on the Kinect for Windows SDK team, to talk about what changed in the Beta 1 Refresh.
If you're on Twitter, make sure to follow @KinectSDKTeam for more information.
Errr... so Rob Relyea is not working on WPF vNext, right?
@felix9: No, he's now on the Kinect for Windows SDK team
Good stuff. Still hoping for an option to include depth data in skeleton frames; that would make syncing a breeze.
All - The details of most of the changes that came with this release are published here: http://bit.ly/KinectSDKBeta1RefreshDetails
@felix9: Yes, I moved to the Kinect for Windows team in June, and had been working part-time in my new role for a month before that. The announcement went out on Twitter on June 20th.
@aL_: Would love to know details of your request. You want to link to the depthFrame (and all the depth data) from the skeleton frame. Have you already requested that on the forums (http://bit.ly/KinectSDKForums)? If so, we probably already have it tracked in our workitem db. If not, please do.
Relyea rocks...he just does.
Excellent, the IR light was driving me crazy. Also, easier syncing is very welcome. Sounds like good stuff; can't wait to dive in. Also can't wait for the proper Beta 2, though.
It would be nice to see some app examples related to the UX design section from the CH9 Kinect event, such as how to swipe items in a ListBox or navigate between different pages.
@adam hecktman - thanks for the kind words. i marked your post as spam though...cause this isn't about me.
@shaggygi - love the idea. can you post that question to the forums (http://bit.ly/KinectSDKForums). likely somebody has example code. if not, we'll try to whip some up.
@Bas - would love to know about your sync goals...if we don't yet meet them. yes, looking forward to continued progress in beta 2 and beyond!
My sync goals are currently simple: recording the video and depth frames, plus some structure that says which pixels in color space are player pixels. Right now I just push all frames into two separate dictionaries (one for color frames and one for depth frames) as soon as I get them, and then afterwards grab a frame from each if the timestamps match. Probably not the most elegant way, so I'm looking forward to that sample.
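Roughly, the pairing I have in mind looks like the sketch below. It's a minimal sketch, not tied to the SDK's actual event signatures: the frame types are generic placeholders, and the tolerance value is a guess you'd tune to your frame rate.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class FramePairer<TColor, TDepth>
{
    readonly SortedDictionary<long, TColor> colorByTime = new SortedDictionary<long, TColor>();
    readonly SortedDictionary<long, TDepth> depthByTime = new SortedDictionary<long, TDepth>();
    readonly long toleranceMs;

    public event Action<TColor, TDepth> PairReady;

    public FramePairer(long toleranceMs) { this.toleranceMs = toleranceMs; }

    // Call these from the color/depth frame-ready handlers, passing each
    // frame's timestamp in milliseconds.
    public void AddColor(long timestampMs, TColor frame) { colorByTime[timestampMs] = frame; TryEmit(); }
    public void AddDepth(long timestampMs, TDepth frame) { depthByTime[timestampMs] = frame; TryEmit(); }

    void TryEmit()
    {
        // Compare the oldest frame from each stream: emit a pair when the
        // timestamps are close enough, otherwise drop the older frame, since
        // nothing newer on the other stream can ever match it.
        while (colorByTime.Count > 0 && depthByTime.Count > 0)
        {
            var c = colorByTime.First(); // smallest timestamp first
            var d = depthByTime.First();
            long delta = c.Key - d.Key;

            if (Math.Abs(delta) <= toleranceMs)
            {
                colorByTime.Remove(c.Key);
                depthByTime.Remove(d.Key);
                if (PairReady != null) PairReady(c.Value, d.Value);
            }
            else if (delta > 0) depthByTime.Remove(d.Key); // depth frame too old to match
            else colorByTime.Remove(c.Key);                // color frame too old to match
        }
    }
}
```

You'd call AddColor/AddDepth from the two frame-ready handlers and consume matched pairs from the PairReady event, instead of scanning the dictionaries afterwards.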
What about a 64-bit version?
@ZippyV: Yes, 64-bit support is in our plan.
For now, you can run on 64-bit Windows as a 32-bit app.
Our driver already has a 64-bit version; the runtime doesn't yet.
If all goes well, we'll likely have a 64-bit release in our next beta.
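For a .NET project, running as a 32-bit process just means targeting x86 (Project Properties > Build > Platform target in Visual Studio), which ends up in the .csproj like this:

```xml
<!-- In the project's .csproj, per build configuration: -->
<PropertyGroup>
  <PlatformTarget>x86</PlatformTarget>
</PropertyGroup>
```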
Are there any plans to open up the Avatar Kinect algorithms? Face tracking (eyes, mouth, eyebrows) and finger tracking? Or provide skeleton tracking that works when sitting down?
I would love to be able to write code that recognizes hand gestures to manipulate windows, or that just keeps tracking you when you're sitting down.
@CKurt: I don't represent the Avatar Kinect team... I'll have to check.
Face tracking and finger tracking are interesting directions you could imagine us taking the SDK. See FAQ #7 in the NUI discussion forum (http://bit.ly/KinectSDKForums).
Skeletal tracking will be improved over time to work better for Windows-oriented scenarios, including sitting down.
Sorry, I've been having trouble posting on C9. I posted about it on the forums and the guys said they would think about it. I think it would be really helpful to have depth data in skeleton frames (at least as an option), because syncing the two will be really common.
For example, if I wanted to do my own finger recognition, I'd use the skeleton data to find the general hand position in the depth data and party on that; see the sketch below.
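Something like this is what I'm picturing once the hand joint has been mapped into depth-image coordinates. The mapping call itself isn't shown; everything here is an assumption rather than the SDK's API: depthMm is the depth frame unpacked to millimeters in row-major order, 0 means "no reading", and the window and depth-band sizes are guesses.

```csharp
using System;

static class HandSegmenter
{
    // Returns a mask of "hand" pixels in a window centered on (handX, handY),
    // the hand joint's position in depth-image coordinates.
    public static bool[,] SegmentHand(ushort[] depthMm, int width, int height,
                                      int handX, int handY,
                                      int windowRadius = 40, int depthBandMm = 120)
    {
        int size = 2 * windowRadius + 1;
        var mask = new bool[size, size];

        ushort handDepth = depthMm[handY * width + handX];
        if (handDepth == 0) return mask; // no depth reading at the joint

        for (int dy = -windowRadius; dy <= windowRadius; dy++)
        {
            for (int dx = -windowRadius; dx <= windowRadius; dx++)
            {
                int x = handX + dx, y = handY + dy;
                if (x < 0 || x >= width || y < 0 || y >= height) continue;

                int d = depthMm[y * width + x];
                // Keep pixels in a thin depth band around the hand; this
                // separates the fingers from the body and the background.
                if (d != 0 && Math.Abs(d - handDepth) <= depthBandMm)
                    mask[dy + windowRadius, dx + windowRadius] = true;
            }
        }
        return mask;
    }
}
```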
When do we get joint orientations?
@fo_shizzle: We haven't announced a timeframe for joint orientations. It's a common request.
@Rob Relyea: thanks for the reply
At least I can read "it may come some day" and not "no chance, get an XDK".
I'll focus on other stuff in the meantime.
Joint orientations would be nice.
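In the meantime, a rough stand-in is to derive a bone direction from two joint positions the SDK already gives you (e.g. elbow as parent, wrist as child). A minimal sketch with a placeholder vector type; note this only recovers a pointing direction, not a full orientation:

```csharp
using System;

struct Vec3
{
    public float X, Y, Z;
    public Vec3(float x, float y, float z) { X = x; Y = y; Z = z; }
}

static class BoneMath
{
    // Normalized direction of the bone from parent joint to child joint.
    // The roll (twist) around the bone axis cannot be recovered from two
    // points alone, which is part of why real joint orientations matter.
    public static Vec3 BoneDirection(Vec3 parent, Vec3 child)
    {
        float dx = child.X - parent.X;
        float dy = child.Y - parent.Y;
        float dz = child.Z - parent.Z;
        float len = (float)Math.Sqrt(dx * dx + dy * dy + dz * dz);
        if (len < 1e-6f) return new Vec3(0, 0, 0); // coincident joints: undefined
        return new Vec3(dx / len, dy / len, dz / len);
    }
}
```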