You can tell that something is going mainstream in the Microsoft world when it gets a regular column in MSDN Magazine. Yep, the Kinect is getting its own column, at least in the online edition of MSDN Magazine.
Leland Holmquest, whom we last mentioned in Project Lily and Context-Aware Dialogue with Kinect, has just kicked off the new Kinect column with an appropriate article...
Starting to Develop with Kinect
Welcome to the inaugural article devoted to developing Kinect for Windows applications. In the April and May issues of MSDN Magazine, I introduced you to Lily, my virtual assistant (https://msdn.microsoft.com/en-us/magazine/hh882450.aspx and https://msdn.microsoft.com/en-us/magazine/hh975374.aspx). In those articles, I demonstrated how to use some of the capabilities provided by the Kinect for Windows SDK (Beta 2) to create a virtual assistant that uses multimodal communication. To determine the action a virtual assistant like Lily executes, the user points to an option while speaking a command. The combination of the two modes of communication—the gesture and the audio command—determines the action.
In this article, I’ll start with the basics and run through a few how-tos for starting to develop with Kinect. First, however, I want to offer a note of encouragement: If you assume that programming the Kinect and incorporating a natural user interface into your applications is beyond your capability, think again. You’ll soon find out how to use the skeleton-tracking capability of the Kinect in a Windows Presentation Foundation (WPF) application without writing a single line of code! It doesn’t get any easier than that.
Project Information URL: https://msdn.microsoft.com/en-us/magazine/jj159883.aspx
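The multimodal idea Leland describes—an action fires only when a pointing gesture and a spoken command line up—is easy to picture in code. Here's a minimal, hypothetical sketch in Python (the real implementation uses the Kinect for Windows SDK in C#; every name below is invented for illustration, not taken from the SDK):

```python
from dataclasses import dataclass

@dataclass
class GestureEvent:
    target: str       # the on-screen option the user is pointing at
    timestamp: float  # seconds

@dataclass
class SpeechEvent:
    command: str      # the recognized voice command
    timestamp: float  # seconds

def fuse(gesture: GestureEvent, speech: SpeechEvent, window: float = 1.5):
    """Combine the two modes of communication: the gesture supplies
    the target, the speech supplies the verb, and both must occur
    within a short time window to count as one intent."""
    if abs(gesture.timestamp - speech.timestamp) > window:
        return None  # the two inputs are unrelated
    return (speech.command, gesture.target)

# Pointing at "calendar" while saying "open" -> ("open", "calendar")
action = fuse(GestureEvent("calendar", 10.0), SpeechEvent("open", 10.4))
```

The time-window check is the interesting design choice: without it, a stray command spoken long after a gesture would still trigger an action.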