A story about the Kinect’s recent SDK launch
Today's entry is a journal of the recent past for Kinect: the Kinect SDK launch and a UCSC student's experience of getting up to speed on it...
Kinecting the Dots: The Kinect’s Recent SDK Launch
On June 16th, I and 29 other developers (drawn from graduate programs and affiliates) converged in Redmond for a secret 24-hour CodeCamp hosted by Microsoft Research. The source of the secrecy was the highly anticipated launch of the new Kinect for Windows SDK beta. (SDK = Software Development Kit.) The Kinect is a peripheral developed for Microsoft’s Xbox 360 that captures depth information in addition to a color video stream and sound. Using this data and proprietary algorithms, Kinect games on the Xbox 360 (and now applications on Windows) can locate and track multiple people in a typical living room, which includes solving problems like separating human figures from objects such as chairs, and from non-participants who might be sitting on the couch behind you.
This was (and is) a hard problem...
The Microsoft-sponsored 24-hour CodeCamp before the launch is not unusual as a format; it was chosen to demonstrate how easy the SDK was to develop for and to put real faces on that development (as opposed to the Kinect’s own release, which shipped with only a few select games). There was even a 24-hour Kinect hack-a-thon hosted in April by the San Francisco Bay Area-based 3D Vision and Kinect Hacking MeetUp Group, and Global Game Jam empowers teams to create a fully working game each year in the span of a weekend. But this event was a large step into the unknown; like many of the other developers, I hadn’t worked with the Kinect before setting foot there.
After getting an introduction to the software, we formed teams of 2-3 people and decided on projects. My group focused on an interactive whiteboarding project called “Kinecting Ideas.” Kinecting Ideas is designed to be a collaborative application that combines the spatial and contextual elements of Prezi (also see ProfHacker posts by Ethan Watrall and Caro Pinto) with the multi-user nature of video conferences and the essence of whiteboarding. A facilitator would use their hands to direct symbols or “ideas,” either pre-assembled or contributed by participants. These would appear in front of a virtual whiteboard containing the content of the meeting. The skeletal tracking isn’t very good for handling text or drawings, but overt gestures, such as positioning your hand or standing, are a good fit for its capabilities, and often happen naturally in such meetings.
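To give a sense of why overt gestures suit skeletal tracking so well, here is a minimal sketch (not the team's actual code, and in Python rather than the SDK's C#): the Kinect SDK's skeletal tracker reports 3D joint positions, and a coarse gesture like "hand raised" can be detected by simply comparing two joints. The `Joint` type, coordinate convention, and `hand_raised` function are illustrative assumptions.

```python
# Hypothetical sketch of coarse gesture detection on skeletal-tracking data.
# The Kinect SDK reports joint positions in meters; here we assume y is up.
from dataclasses import dataclass

@dataclass
class Joint:
    x: float  # meters, left/right (assumed convention)
    y: float  # meters, up is positive
    z: float  # meters, distance from the sensor

def hand_raised(hand: Joint, head: Joint, margin: float = 0.10) -> bool:
    """True when the hand is at least `margin` meters above the head."""
    return hand.y > head.y + margin

# Example: a hand well above the head counts as a "raise" gesture.
head = Joint(0.0, 1.4, 2.0)
print(hand_raised(Joint(0.1, 1.6, 2.0), head))  # True
print(hand_raised(Joint(0.1, 1.0, 2.0), head))  # False
```

A threshold comparison like this is robust against tracking jitter in a way that recognizing handwriting or fine drawing strokes is not, which is exactly the trade-off described above.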
Project Information URL: http://chronicle.com/blogs/profhacker/kinecting-the-dots-the-kinects-recent-sdk-launch/35428