Tom Kerkhove, Microsoft MVP and Friend of the Gallery, continues his recent work with the Kinect for Windows v2 device and SDK, Capturing Facial Expressions with the Kinect for Windows v2, today going beyond just capturing facial expressions to actually analyzing them...
In one of my previous posts (read it here), I walked you through building an application that tracks someone's face and displays that person's expressions.
Cool, but on its own this has little added value – how happy was the person? Was he/she interested? Was he/she wearing glasses?
In this post we will create an application that performs face tracking for all tracked persons (a maximum of six). Afterwards the data is analyzed into statistics that tell you, for example, what percentage of the time a person looked away or how likely it is that he/she was wearing glasses, …
As of this writing, the Kinect for Windows SDK is still in Public Preview (v1409) and can be found here.
Here is an example of the analytics that we will generate.
We will use the concept of trackers that keep track of the expressions for each tracked body. For each body there is a FaceTracker, which acts as a conductor managing a set of trackers – one for each face feature we want to track – called FaceFeatureTrackers. This means that the FaceTracker receives all the FaceFrames and passes them on to the FaceFeatureTrackers.
Once a certain person leaves the scene, the tracker will notify our application and provide us with a set of analytics.
Our application will be a thin shell that just creates the trackers and waits until it is notified that analytics are available. These analytics will then be saved to disk.
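The conductor pattern described above can be sketched as follows. This is a minimal Python illustration of the concept only – the actual project is written in C# against the Kinect SDK, and all class and method names here (`FaceTracker`, `FaceFeatureTracker`, `on_face_frame`, `on_body_left`) are assumptions for illustration, not the SDK's API:

```python
class FaceFeatureTracker:
    """Counts how often each result value occurred for one face feature."""
    def __init__(self, feature_name):
        self.feature_name = feature_name
        self.counts = {}          # result value -> occurrence count
        self.total_frames = 0

    def track(self, result):
        self.counts[result] = self.counts.get(result, 0) + 1
        self.total_frames += 1

    def analytics(self):
        # Percentage of frames per result value for this feature.
        return {value: count / self.total_frames * 100
                for value, count in self.counts.items()}


class FaceTracker:
    """Conductor for one tracked body: receives face frames and
    forwards each feature result to its FaceFeatureTracker."""
    def __init__(self, body_id, features):
        self.body_id = body_id
        self.trackers = {f: FaceFeatureTracker(f) for f in features}

    def on_face_frame(self, frame):
        # frame is assumed to be a mapping of feature -> result value.
        for feature, result in frame.items():
            if feature in self.trackers:
                self.trackers[feature].track(result)

    def on_body_left(self):
        # The body left the scene: hand the analytics to the application.
        return {f: t.analytics() for f, t in self.trackers.items()}
```

For example, feeding two frames to `FaceTracker(1, ["Happy", "WearingGlasses"])` where the person was happy in one frame would report 50% for that value when the body leaves the scene.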
I will briefly talk about the flow I used for this application without going in-depth.
The basics from my previous post still apply regarding project setup, build events, etc.
We will start by creating the constructor for the FaceTracker, where we pass in the body ID, the requested features to track, and our KinectSensor. Thereafter we record a timestamp of when we started, so we can calculate the tracking duration later.
As we've seen in my previous post, we will need a FaceFrameSource and a FaceFrameReader so we can start receiving frames.
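The start-timestamp bookkeeping is simple; here is a hedged Python sketch of the idea (the real project is C#, and `TrackingSession` is a hypothetical name used only for this illustration):

```python
from datetime import datetime, timezone


class TrackingSession:
    """Records when tracking for a body started so the total
    duration can be computed once the body leaves the scene."""
    def __init__(self):
        self.started_at = datetime.now(timezone.utc)

    def duration_seconds(self):
        # Elapsed time since tracking began, in seconds.
        return (datetime.now(timezone.utc) - self.started_at).total_seconds()
```

Knowing the total duration is what lets the analytics be expressed as "percentage of the time" rather than raw frame counts.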
Building this application was pretty easy to do – just create some trackers that are in charge of tracking all the occurrences. Thereafter we mold the data into analytics that illustrate what the expressions were for that person.
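The final step, saving the per-person analytics to disk, could be sketched like this. This is an assumed illustration in Python (the file naming and JSON format are my own choices, not necessarily what the actual C# project writes):

```python
import json
import os


def save_analytics(body_id, analytics, directory):
    """Persist one body's analytics dictionary as a JSON file on disk."""
    path = os.path.join(directory, f"face-analytics-body-{body_id}.json")
    with open(path, "w") as f:
        json.dump(analytics, f, indent=2)
    return path
```

Writing one file per tracked body keeps the results for up to six simultaneous persons cleanly separated.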
Although the concept is very simple, it can be very powerful – for instance, tracking attendees' emotional responses when they see your prototype at a conference.
You can download my code here so you can try it yourself!
Project Information URL: http://www.kinectingforwindows.com/2014/09/18/analysing-expressions-with-face-tracking/
Project Source URL: https://github.com/KinectingForWindows/G2KFaceAnalytics
Other posts from Tom you might also find interesting:
- Capturing Facial Expressions with the Kinect for Windows v2
- "Frames Monitor" Utility from Tom Kerkhove
- "Comparing MultiSourceFrameReader and XSourceFrameReader"
- Kinect to a Quad... Kinecting AR Drone series start
- Kinecting AR Drone - Part 1
- "Exploring the Kinect Developer Toolkit Browser"
- Kinect for Windows – What’s new, a view from a Kinect for Windows MVP
- Build on the Kinect for Windows v2
- Kinect Television - Putting the You in the TV...
- Getting down and dirty coding with the Kinect for Windows v2