Face Swap... with a little help from the Kinect for Windows v2
- Posted: Mar 26, 2014 at 6:00 AM
- 10,606 Views
- 1 Comment
Today's inspirational project is one that I really wish we had the source or a download for, and while I try to focus only on projects that include them, this one was just too cool not to share. It sure whets our appetite for Apache's new library...
Ever wish you looked like someone else? Maybe Brad Pitt or Jennifer Lawrence? Well, just get Brad or Jennifer in the same room with you, turn on the Kinect for Windows v2 sensor, and presto: you can swap your mug for theirs (and vice versa, of course). Don’t believe it? Then take a look at this cool video from Apache, in which two developers happily trade faces.
According to Adam Vahed, managing director at Apache, the ability of the Kinect for Windows v2 sensor and SDK to track multiple bodies was essential to this project, as the solution needed to track the head position of both users. In fact, Adam rates the ability to perform full-skeletal tracking of multiple bodies as the Kinect for Windows v2 sensor’s most exciting feature, observing that it “opens up so many possibilities for shared experiences and greater levels of game play in the experiences we create.”
Project Information URL: http://blogs.msdn.com/b/kinectforwindows/archive/2014/03/17/swap-your-face-really.aspx
An Apache Labs project to demonstrate dynamic face swapping using the Kinect.
Because the Kinect doesn't track head rotation, both users need to be looking in the same direction for the illusion to work best.
This uses the Apache Kinect Library (alpha version), which integrates the Kinect for Windows v2 (dev preview) sensor and SDK with the Unity3D gaming engine.
The 1920 x 1080 colour feed from the Kinect is pushed to a Unity Texture and is displayed using an orthographic camera.
The users' head positions in 3D space are mapped to the relevant portions of the 2D video feed and these are then cut out and applied to two planes using an oval mask to blur the edges.
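The blog doesn't include source, but the swap described above (project each 3D head position into the 2D colour frame, cut out the region around it, and blend it back at the other head's position through an oval mask) can be sketched roughly as follows. This is a minimal illustration, not Apache's code: the intrinsics, patch sizes, and function names are all assumptions, and the real project would use the Kinect SDK's coordinate mapper and Unity textures rather than NumPy arrays.

```python
import numpy as np

def project_to_color(point_3d, fx=1081.37, fy=1081.37, cx=959.5, cy=539.5):
    """Project a 3D camera-space point (metres) into 1920 x 1080 colour-pixel
    coordinates with a simple pinhole model. The intrinsics here are
    illustrative placeholders, not the sensor's calibrated values."""
    x, y, z = point_3d
    u = fx * x / z + cx
    v = fy * y / z + cy
    return int(round(u)), int(round(v))

def oval_mask(h, w, feather=0.15):
    """Elliptical alpha mask: 1.0 at the centre, fading to 0.0 at the rim,
    so the cut-out face blends softly over the video feed."""
    ys, xs = np.mgrid[0:h, 0:w]
    # Normalised radial distance: 0 at the centre, 1 at the ellipse edge.
    r = np.sqrt(((xs - (w - 1) / 2) / (w / 2)) ** 2 +
                ((ys - (h - 1) / 2) / (h / 2)) ** 2)
    return np.clip((1.0 - r) / feather, 0.0, 1.0)

def swap_faces(frame, head_a, head_b, half_w=80, half_h=100):
    """Cut oval regions around the two projected head positions and swap
    them, alpha-blending the edges to hide the seams."""
    out = frame.astype(np.float32).copy()
    ua, va = project_to_color(head_a)
    ub, vb = project_to_color(head_b)
    mask = oval_mask(2 * half_h, 2 * half_w)[..., None]
    patch_a = frame[va - half_h:va + half_h,
                    ua - half_w:ua + half_w].astype(np.float32)
    patch_b = frame[vb - half_h:vb + half_h,
                    ub - half_w:ub + half_w].astype(np.float32)
    out[va - half_h:va + half_h, ua - half_w:ua + half_w] = \
        mask * patch_b + (1 - mask) * patch_a
    out[vb - half_h:vb + half_h, ub - half_w:ub + half_w] = \
        mask * patch_a + (1 - mask) * patch_b
    return out.astype(np.uint8)
```

In the actual project this per-pixel work would happen on the GPU via Unity materials on the two planes, but the geometry (3D-to-2D projection, rectangular cut-out, elliptical feathered mask) is the same idea.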
Please note: This is preliminary software and/or hardware and APIs are preliminary and subject to change.