Capturing Facial Expressions with the Kinect for Windows v2


Today Tom Kerkhove, Kinect for Windows MVP, shows off one of the coolest features of the new Kinect for Windows v2 device and SDK: built-in facial expression support...

First look at Expressions – Displaying expressions for a tracked person

One of the biggest feature requests was the ability to track the expressions of users. Today I'm happy to tell you that this is now available in the alpha [UPDATE (15/07/2014) – The sample is updated based on the public preview SDK.] SDK thanks to face tracking!

In this post I will walk you through the steps to display the expressions for one user, but the same approach works for all tracked persons.


I developed a small template that displays the camera feed so you can follow along; it is available here.


Setting up expression tracking is pretty easy: we set up body tracking, assign the tracked body to a FaceFrameSource, and start processing the results. This requires us to add two references: Microsoft.Kinect for body tracking and Microsoft.Kinect.Face for face tracking.
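The wiring can be sketched roughly like this. This is a minimal sketch for a single tracked person, not Tom's exact sample code; the field names (`_sensor`, `_faceSource`, and so on) are my own:

```csharp
using Microsoft.Kinect;        // body tracking
using Microsoft.Kinect.Face;   // face / expression tracking

// Sketch: open the sensor and create a reader for body frames.
KinectSensor _sensor = KinectSensor.GetDefault();
_sensor.Open();

BodyFrameReader _bodyReader = _sensor.BodyFrameSource.OpenReader();

// Ask the face pipeline for the expression-related features we care about.
// The tracking id (second argument) is 0 for now; we assign a real body's
// TrackingId once body tracking finds someone.
FaceFrameSource _faceSource = new FaceFrameSource(
    _sensor,
    0,
    FaceFrameFeatures.Happy
    | FaceFrameFeatures.FaceEngagement
    | FaceFrameFeatures.Glasses
    | FaceFrameFeatures.LeftEyeClosed
    | FaceFrameFeatures.RightEyeClosed
    | FaceFrameFeatures.MouthOpen
    | FaceFrameFeatures.MouthMoved
    | FaceFrameFeatures.LookingAway);

FaceFrameReader _faceReader = _faceSource.OpenReader();
```

Note that you only pay for the features you request in the `FaceFrameFeatures` flags, so ask for just the expressions you intend to display.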

As I mentioned in my basic overview, we need to create a BodyFrameReader to start receiving a BodyFrameReference in each FrameArrived event.
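The two event handlers might look like the sketch below. It assumes a FaceFrameSource field named `_faceSource` has already been created with the expression features you want; again, all names here are my own, not necessarily those in the sample:

```csharp
// Sketch of the body FrameArrived handler: grab the first tracked body
// and hand its TrackingId to the face source.
private Body[] _bodies;

private void OnBodyFrameArrived(object sender, BodyFrameArrivedEventArgs e)
{
    using (BodyFrame frame = e.FrameReference.AcquireFrame())
    {
        if (frame == null) return;

        if (_bodies == null)
            _bodies = new Body[frame.BodyCount];
        frame.GetAndRefreshBodyData(_bodies);

        foreach (Body body in _bodies)
        {
            if (body.IsTracked)
            {
                // Link the face source to this person.
                _faceSource.TrackingId = body.TrackingId;
                break;
            }
        }
    }
}

// Sketch of the face FrameArrived handler: read the DetectionResult
// (Yes / No / Maybe / Unknown) for each requested FaceProperty.
private void OnFaceFrameArrived(object sender, FaceFrameArrivedEventArgs e)
{
    using (FaceFrame frame = e.FrameReference.AcquireFrame())
    {
        if (frame == null || frame.FaceFrameResult == null) return;

        foreach (var property in frame.FaceFrameResult.FaceProperties)
        {
            // e.g. Happy / MouthOpen with a DetectionResult value,
            // ready to bind to the UI.
            System.Diagnostics.Debug.WriteLine(
                "{0}: {1}", property.Key, property.Value);
        }
    }
}
```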


Testing the application

When you give the application a spin, this is how it should look:



In this post I illustrated how easy it is to set up expression tracking for one person and what it allows you to do, for example gauging user reactions when they see a new product at a conference.

Keep in mind that the sensor is able to track up to six people, and your algorithm should support this as well.
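One way to sketch six-person support is to keep one FaceFrameSource and FaceFrameReader per possible body and pair body *i* with face source *i*. This is an assumption about how you might structure it, not code from the sample:

```csharp
// Sketch: one face source/reader per possible body.
// BodyCount is 6 on the Kinect v2; field names are my own.
private FaceFrameSource[] _faceSources;
private FaceFrameReader[] _faceReaders;

private void InitializeFaceTracking(KinectSensor sensor, FaceFrameFeatures features)
{
    int count = sensor.BodyFrameSource.BodyCount;   // 6
    _faceSources = new FaceFrameSource[count];
    _faceReaders = new FaceFrameReader[count];

    for (int i = 0; i < count; i++)
    {
        _faceSources[i] = new FaceFrameSource(sensor, 0, features);
        _faceReaders[i] = _faceSources[i].OpenReader();
        _faceReaders[i].FrameArrived += OnFaceFrameArrived;
    }
}

// Called from the body FrameArrived handler: refresh each slot's
// TrackingId whenever its current id is no longer valid.
private void UpdateTrackingIds(Body[] bodies)
{
    for (int i = 0; i < bodies.Length; i++)
    {
        if (bodies[i].IsTracked && !_faceSources[i].IsTrackingIdValid)
        {
            _faceSources[i].TrackingId = bodies[i].TrackingId;
        }
    }
}
```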


[Click through for all the code, the source links and more]

Project Information URL:

Project Source URL:

Contact Information:

Other posts from Tom you might also find interesting: