Capturing Facial Expressions with the Kinect for Windows v2
- Posted: Jul 24, 2014 at 6:00AM
- 11,260 views
- 2 comments
Today Tom Kerkhove, Kinect for Windows MVP, shows off one of the coolest features of the new Kinect for Windows v2 device and SDK: built-in facial expression support...
One of the biggest feature requests was the ability to track the expressions of users. Today I’m happy to tell you that this is now available in the alpha SDK thanks to face tracking! [UPDATE (15/07/2014) – The sample has been updated based on the public preview SDK.]
In this post I will walk you through the steps to display the expressions of one user, but the same approach works for every tracked person!
I developed a small template that displays the camera feed so you can follow along; it is available here.
Setting up expression tracking is pretty easy: we just need to set up body tracking, assign a FaceFrameSource to it and start processing the results. This requires us to add two references: Microsoft.Kinect for the body tracking and Microsoft.Kinect.Face for the face tracking.
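To give an idea of what that setup looks like, here is a minimal sketch based on the public preview SDK. The type and member names (KinectSensor, FaceFrameSource, FaceFrameFeatures) come from the Microsoft.Kinect and Microsoft.Kinect.Face assemblies; the field names are my own and purely illustrative.

```csharp
using Microsoft.Kinect;
using Microsoft.Kinect.Face;

// Grab the default sensor and open it.
KinectSensor _sensor = KinectSensor.GetDefault();
_sensor.Open();

// Pick the facial features we want each face frame to contain.
// Every flag we request here shows up later as a FaceProperty
// in the FaceFrameResult.
const FaceFrameFeatures _faceFeatures =
    FaceFrameFeatures.BoundingBoxInColorSpace |
    FaceFrameFeatures.Happy |
    FaceFrameFeatures.LeftEyeClosed |
    FaceFrameFeatures.RightEyeClosed |
    FaceFrameFeatures.MouthOpen |
    FaceFrameFeatures.MouthMoved |
    FaceFrameFeatures.LookingAway |
    FaceFrameFeatures.Glasses |
    FaceFrameFeatures.FaceEngagement;

// One face source per tracked body. The initial tracking id is 0;
// we assign the real one later, from body tracking.
FaceFrameSource _faceSource = new FaceFrameSource(_sensor, 0, _faceFeatures);
FaceFrameReader _faceReader = _faceSource.OpenReader();
```

The important design point is that the face pipeline does not find faces on its own: it follows a body, which is why the FaceFrameSource needs a tracking id from the body stream.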
As I mentioned in my basic overview, we need to create a BodyFrameReader to start receiving BodyFrameReferences in its FrameArrived event.
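A sketch of those two handlers, again assuming the field names from above (_bodies, _faceSource, _faceReader are illustrative, not part of the SDK): the body handler keeps the face source pointed at a tracked body, and the face handler reads the detection results for the features we requested.

```csharp
using System;
using Microsoft.Kinect;
using Microsoft.Kinect.Face;

private Body[] _bodies;

private void OnBodyFrameArrived(object sender, BodyFrameArrivedEventArgs e)
{
    using (BodyFrame frame = e.FrameReference.AcquireFrame())
    {
        if (frame == null) return;

        if (_bodies == null) _bodies = new Body[frame.BodyCount];
        frame.GetAndRefreshBodyData(_bodies);

        // Point the face source at the first tracked body so the
        // face pipeline knows whose face to analyze.
        foreach (Body body in _bodies)
        {
            if (body.IsTracked)
            {
                _faceSource.TrackingId = body.TrackingId;
                break;
            }
        }
    }
}

private void OnFaceFrameArrived(object sender, FaceFrameArrivedEventArgs e)
{
    using (FaceFrame frame = e.FrameReference.AcquireFrame())
    {
        if (frame == null || frame.FaceFrameResult == null) return;

        FaceFrameResult result = frame.FaceFrameResult;

        // Each requested feature maps to a DetectionResult:
        // Yes, No, Maybe or Unknown.
        DetectionResult happy = result.FaceProperties[FaceProperty.Happy];
        DetectionResult mouthOpen = result.FaceProperties[FaceProperty.MouthOpen];
        Console.WriteLine("Happy: {0}, MouthOpen: {1}", happy, mouthOpen);
    }
}
```

Note that a FaceFrame can arrive with a null FaceFrameResult when the source has lost its face, so the null check is not optional.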
Testing the application
When you give the application a spin, this is what it should look like:
In this post I illustrated how easy it is to set up expression tracking for one person and what it allows you to do, for example gathering user feedback when people see a new product at a conference.
Keep in mind that the sensor is able to track up to six persons, and your algorithm should support this as well.
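Scaling the single-user sketch up is mostly a matter of keeping one FaceFrameSource/FaceFrameReader pair per body slot. A possible approach, with the array names being illustrative:

```csharp
using Microsoft.Kinect;
using Microsoft.Kinect.Face;

// One face source/reader pair per potential body slot, so every
// person the sensor can track gets their own face pipeline.
int bodyCount = _sensor.BodyFrameSource.BodyCount;
FaceFrameSource[] _faceSources = new FaceFrameSource[bodyCount];
FaceFrameReader[] _faceReaders = new FaceFrameReader[bodyCount];

for (int i = 0; i < bodyCount; i++)
{
    _faceSources[i] = new FaceFrameSource(_sensor, 0, _faceFeatures);
    _faceReaders[i] = _faceSources[i].OpenReader();
    _faceReaders[i].FrameArrived += OnFaceFrameArrived;
}

// In the body handler, keep each source paired with its body slot.
for (int i = 0; i < _bodies.Length; i++)
{
    if (_bodies[i].IsTracked)
    {
        _faceSources[i].TrackingId = _bodies[i].TrackingId;
    }
}
```

Pairing source *i* with body slot *i* keeps the bookkeeping simple; you can then tell the results apart in the face handler by comparing the sender against your sources.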
Project Source URL: https://github.com/KinectingForWindows?tab=repositories
Other posts from Tom you might also find interesting: