TWC9: Nikola, Vlad, Windows 10 Update, More Azure, Kinect SDK 2.0 and more...

Since these two posts cover the same basic thing, the new Face capability in the Kinect for Windows v2 SDK, with an intro from Vangos Pterneas and a more detailed look from Mike Taulty, I thought it would make sense to highlight them together.
What if your computer could detect your eyes, nose, and mouth? What if an application could understand your facial expressions? What if you could build such applications with minimal effort? Until today, if you wanted to create an accurate mechanism for annotating real-time facial characteristics, you had to play with OpenCV and spend a ton of time experimenting with various algorithms and advanced machine-vision concepts.
Luckily for us, here comes Kinect for Windows version 2 to save the day.
One of the most exciting features of Kinect 2 is the new and drastically improved Face API. Using this new API, we’ll create a simple Windows application that will understand people’s expressions. Watch the following video to see what we’ll develop:
Read on for the tutorial.
Note: Kinect provides two ways to access facial characteristics: the Face Basics API and the HD Face API. The first one lets us access the most common features, such as the position of the eyes, nose, and mouth, as well as the facial expressions. HD Face, on the other hand, lets us access a richer and more complex collection of facial points. We’ll examine Face Basics in this article and HD Face in the next blog post.
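Both API surfaces live in the Microsoft.Kinect.Face assembly and follow the same source/reader pattern as the sensor's other streams. As a rough, hedged C# sketch of the two entry points (variable names are illustrative and the feature flag is just a placeholder, not the set the tutorial actually requests):

```csharp
using Microsoft.Kinect;
using Microsoft.Kinect.Face;

KinectSensor sensor = KinectSensor.GetDefault();

// Face Basics: 2D face points, bounding boxes and expression flags.
FaceFrameSource faceSource =
    new FaceFrameSource(sensor, 0, FaceFrameFeatures.PointsInColorSpace);
FaceFrameReader faceReader = faceSource.OpenReader();

// HD Face: a much denser set of 3D face vertices (the follow-up post's topic).
HighDefinitionFaceFrameSource hdFaceSource =
    new HighDefinitionFaceFrameSource(sensor);
HighDefinitionFaceFrameReader hdFaceReader = hdFaceSource.OpenReader();
```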
Face Basics features
Here are the main features of the Face Basics API:
- Detection of facial points in the 2D space
  - Left & right eyes
  - Nose
  - Mouth
- Head rectangle
- Detection of expressions
  - Happy
  - Left/right eye open
  - Left/right eye closed
  - Engagement
  - Looking away
- Detection of accessories
  - Glasses
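Each of those features is requested up front as FaceFrameFeatures flags, and the face tracker then has to be pointed at a tracked body. Here is a minimal, hedged sketch of that wiring (the BodyFrameReader plumbing and variable names are illustrative, not lifted from the tutorial's source):

```csharp
using Microsoft.Kinect;
using Microsoft.Kinect.Face;

KinectSensor sensor = KinectSensor.GetDefault();
sensor.Open();

// Ask for everything Face Basics can report about a single face.
FaceFrameFeatures features =
    FaceFrameFeatures.BoundingBoxInColorSpace |
    FaceFrameFeatures.PointsInColorSpace |
    FaceFrameFeatures.Happy |
    FaceFrameFeatures.LeftEyeClosed |
    FaceFrameFeatures.RightEyeClosed |
    FaceFrameFeatures.MouthOpen |
    FaceFrameFeatures.FaceEngagement |
    FaceFrameFeatures.LookingAway |
    FaceFrameFeatures.Glasses;

FaceFrameSource faceSource = new FaceFrameSource(sensor, 0, features);
FaceFrameReader faceReader = faceSource.OpenReader();

// Face tracking is driven by a body tracking id, fed from the body stream.
BodyFrameReader bodyReader = sensor.BodyFrameSource.OpenReader();
Body[] bodies = new Body[sensor.BodyFrameSource.BodyCount];

bodyReader.FrameArrived += (s, e) =>
{
    using (BodyFrame frame = e.FrameReference.AcquireFrame())
    {
        if (frame == null) return;

        frame.GetAndRefreshBodyData(bodies);

        // Point the face source at the first tracked body, if any.
        foreach (Body body in bodies)
        {
            if (body.IsTracked)
            {
                faceSource.TrackingId = body.TrackingId;
                break;
            }
        }
    }
};
```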
Prerequisites
To create, build, and run this app, you’ll need the following:
...
Project Information URL: http://pterneas.com/2014/12/21/kinect-2-face-basics/
Project Source URL: https://github.com/vangos/kinect-2-face-basics
Contact Information:
In looking across the previous posts that I’d made on Kinect for Windows V2, one of the areas that I hadn’t touched upon was how the Kinect surfaces data about the user’s face and so I thought I’d experiment with that a little here.
There are two functional areas when it comes to working with faces – there’s the regular facial data as represented by the FaceFrameResult class and the various properties hanging off it which can tell you about things like;
- whether the user is happy (this seems pretty deep to me)
- whether the eyes are open/closed
- whether the mouth is open/closed
- etc.
and that’s introduced on this page in the MSDN documentation.
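Concretely, those flags come back as DetectionResult values keyed by FaceProperty on the FaceFrameResult. A hedged sketch of a frame handler (it assumes a faceReader already opened and tied to a body as in the earlier snippet; the handler body is only an illustration):

```csharp
faceReader.FrameArrived += (s, e) =>
{
    using (FaceFrame frame = e.FrameReference.AcquireFrame())
    {
        if (frame == null || frame.FaceFrameResult == null) return;

        FaceFrameResult result = frame.FaceFrameResult;

        // Each property is reported as Yes / No / Maybe / Unknown.
        DetectionResult happy = result.FaceProperties[FaceProperty.Happy];
        DetectionResult leftEyeClosed = result.FaceProperties[FaceProperty.LeftEyeClosed];
        DetectionResult rightEyeClosed = result.FaceProperties[FaceProperty.RightEyeClosed];
        DetectionResult mouthOpen = result.FaceProperties[FaceProperty.MouthOpen];

        if (happy == DetectionResult.Yes && mouthOpen == DetectionResult.Yes)
        {
            // React however the app likes - swap an overlay, update a label, etc.
        }
    }
};
```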
Then there’s the “high definition” face tracking functionality (introduced here on MSDN) which I’d categorise as more advanced and relates to determining the shape of a user’s face and the way in which the face is moving.
For this post, I’m going to duck the “high definition” functionality and work with the simpler face tracking.
Getting Started – Showing Video
In order to get started, I thought I’d make a Windows Store application and have it display video (or “color”) frames from the sensor and, as I’ve done in some previous posts, I thought I’d use Win2D in order to do my drawing.
...
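The post's actual Win2D setup is in the elided code and the linked download; as a rough, hedged sketch of just the Kinect side of showing color frames (the CanvasControl drawing is only indicated in comments, and variable names are illustrative):

```csharp
using Microsoft.Kinect;

KinectSensor sensor = KinectSensor.GetDefault();
sensor.Open();

// Describe the color stream converted to BGRA (4 bytes per pixel).
FrameDescription colorDesc =
    sensor.ColorFrameSource.CreateFrameDescription(ColorImageFormat.Bgra);
byte[] pixels = new byte[colorDesc.Width * colorDesc.Height * 4];

ColorFrameReader colorReader = sensor.ColorFrameSource.OpenReader();
colorReader.FrameArrived += (s, e) =>
{
    using (ColorFrame frame = e.FrameReference.AcquireFrame())
    {
        if (frame == null) return;

        // The raw stream is YUY2; ask the SDK to convert to BGRA for drawing.
        frame.CopyConvertedFrameDataToArray(pixels, ColorImageFormat.Bgra);

        // In the post this buffer ends up as a Win2D CanvasBitmap that gets
        // drawn in the CanvasControl's Draw handler; invalidating the control
        // here triggers a redraw with the latest frame.
    }
};
```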
Getting a Few More Facial Features
So far, I’ve relied purely on obtaining the bounding box of the face represented in the color space co-ordinates but there’s a lot more information that facial tracking gives me. Specifically;
- The bounding box of the face in infra-red space
- The positions of the facial features in either color or infra-red space
  - Left eye, right eye, nose, mouth left corner, mouth right corner
- Whether the eyes are closed or open
- Whether the mouth is closed/open
- Whether the face is wearing glasses
- Whether the face is engaged
- Whether the face is looking away
- The rotation of the face
and so I could do a lot more here.
It’s probably easier to do something with all those features using some kind of XAML-based user control where visual states can be used to toggle various pieces of a face on/off but, given that I’d used Win2D up to this point, I wondered whether I could just add a few more images to my project that represented overlays on the face above
...
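As a rough sketch of how those extra pieces surface on the same FaceFrameResult (this continues the earlier handler; result, session and glassesBitmap are assumed/hypothetical names, and the overlay math is only indicative):

```csharp
// Bounding boxes in both coordinate spaces.
RectI colorBox = result.FaceBoundingBoxInColorSpace;
RectI irBox = result.FaceBoundingBoxInInfraredSpace;

// 2D positions (color-space pixels) of the five tracked face points.
PointF leftEye    = result.FacePointsInColorSpace[FacePointType.EyeLeft];
PointF rightEye   = result.FacePointsInColorSpace[FacePointType.EyeRight];
PointF nose       = result.FacePointsInColorSpace[FacePointType.Nose];
PointF mouthLeft  = result.FacePointsInColorSpace[FacePointType.MouthCornerLeft];
PointF mouthRight = result.FacePointsInColorSpace[FacePointType.MouthCornerRight];

// Accessories and engagement use the same DetectionResult scheme as before.
bool wearingGlasses =
    result.FaceProperties[FaceProperty.WearingGlasses] == DetectionResult.Yes;
bool engaged =
    result.FaceProperties[FaceProperty.Engaged] == DetectionResult.Yes;

// Head orientation comes back as a quaternion.
Vector4 rotation = result.FaceRotationQuaternion;

// With Win2D, an overlay image (say, cartoon glasses) could then be drawn at
// the eye positions from the CanvasControl's Draw handler, roughly:
// session.DrawImage(glassesBitmap, leftEye.X - 50, leftEye.Y - 20);
```

Drawing the overlays that way keeps everything in the same Win2D drawing path the post already uses, rather than introducing a separate XAML control.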
Project Information URL: http://mtaulty.com/CommunityServer/blogs/mike_taultys_blog/archive/2014/12/31/kinect-for-windows-v2-sdk-playing-with-faces.aspx
Project Source URL: http://www.mtaulty.com/downloads/KinectFaceFrankOMatic.zip
Contact Information:
Follow @CH9
Follow @Coding4Fun
Follow @KinectWindows
Follow @gduncan411