HeadTexter... Using the Intel Perceptual Computing SDK, C# and your noggin to text...
- Posted: Mar 11, 2013 at 6:00 AM
- 6,712 Views
- 3 Comments
I thought today's project was interesting and kind of cool. It uses the Intel Perceptual Computing SDK and your laptop's (or PC's) camera to convert head gestures/turns into text.
And who doesn't like a new SDK to play with?
HeadTexter is a simple app designed to convert head movement into English text. It uses the Intel Perceptual Computing SDK for head tracking and was a contest entry in the recently concluded Perceptual Computing Challenge.
Before you read the rest of the article, you should know the reason behind this madness of trying to convert head movement to text: it is research toward giving a direction for communication to people with Alzheimer's disease.
Alzheimer's disease is a disease which is mainly caused by stroke. It paralyzes patients, and they lose the capability to move any part of their body other than their head. Head movement also becomes limited with time, and the only way they can convey any message is through their eyes. So I basically wanted to build a system that converts eye movement to text. But the limitations of the Beta SDK in face tracking, and even more so in eye tracking, made me shift the design a bit towards face tracking.
The interaction may not be quite realistic for Alzheimer's patients, but it had to be started somewhere. So I decided to build the basic framework with head movement (which turned out to be more complicated than I thought it would be) and to transform the work to eye tracking once the SDK issues are fixed. That would mean some simple changes in the transform part and should come easy.
My decision to write a tutorial on a head-tracking system rather than hand-gesture work is pretty simple: it works with any normal webcam. Yes, that's right. All of you can actually install the SDK and get on with the programming without a Creative gesture camera. So my motive in introducing the Intel Perceptual Computing SDK to the community (especially those crazy C#'ers) was to give everybody an opportunity and a simple walkthrough with the SDK that they can carry forward.
So what are we actually learning in this tutorial?
a) Working with the Perceptual Computing SDK
b) Building a head-tracking system with the Perceptual Computing SDK
c) Doing something funny (or meaningful) with the tracked data
d) Learning how to use a 175-year-old computing concept effectively
Let's not waste any more digital bytes clarifying my motives or the article's objective, and start with what we do best: code.
Using the code
First, download the SDK from the Perceptual Computing SDK Download Page.
Starting with the SDK:
The SDK is mainly written in C++, as you might expect due to speed constraints, and what you get for C# is a managed DLL. So start a project and add a reference to libpixclr.dll, located in the SDK's bin/x64 or bin/x86 folder. If you really want a 64-bit application, you must also change the project's platform target to x64. If you select the x86 DLL, don't forget to change the platform target to x86; "Any CPU" will not work. Unlike some other solutions, such as Microsoft's Ink technology, which works only on x86 and fails on x64, there are no such worries with this one: select x86 and it runs well on both architectures.
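For reference, the platform target the paragraph above describes lives in the project file as well as in the Project Properties dialog; a minimal sketch of the relevant .csproj fragment (property names per standard MSBuild conventions) looks like this:

```xml
<!-- Inside the Debug/Release PropertyGroup of the .csproj -->
<PlatformTarget>x86</PlatformTarget>
```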
All of us who are well versed with OpenCV and C++-style coding use an infinite loop for acquiring data. However, as this had to be C#, I wanted the solution to be BackgroundWorker based. So instead of OpenCV's for(;;) loop, we capture frames from DoWork and perform the processing in ProgressChanged. The UI and SDK threads run on entirely different stacks, so we use a delegate to update the UI with results from the SDK thread.
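The BackgroundWorker pattern described above can be sketched like this. Note that CaptureFrame and ProcessFrame are hypothetical placeholders standing in for the real PerC SDK capture and head-tracking calls; only the threading skeleton is the point here:

```csharp
using System.ComponentModel;
using System.Windows.Forms;

public partial class MainForm : Form
{
    private BackgroundWorker worker;

    public MainForm()
    {
        InitializeComponent();
        worker = new BackgroundWorker
        {
            WorkerReportsProgress = true,
            WorkerSupportsCancellation = true
        };
        worker.DoWork += Worker_DoWork;                   // runs on a background thread
        worker.ProgressChanged += Worker_ProgressChanged; // marshalled back to the UI thread
        worker.RunWorkerAsync();
    }

    private void Worker_DoWork(object sender, DoWorkEventArgs e)
    {
        // Replaces OpenCV's for(;;) loop: keep grabbing frames until cancelled.
        while (!worker.CancellationPending)
        {
            object frameData = CaptureFrame(); // placeholder for the PerC frame grab
            worker.ReportProgress(0, frameData);
        }
    }

    private void Worker_ProgressChanged(object sender, ProgressChangedEventArgs e)
    {
        // Safe to touch UI controls here; e.UserState carries the frame data.
        Text = ProcessFrame(e.UserState); // placeholder for the head-tracking logic
    }

    private object CaptureFrame() { return null; }            // stub: real SDK call goes here
    private string ProcessFrame(object frame) { return ""; }  // stub: real processing goes here
}
```

Using ReportProgress rather than writing to controls from DoWork is what keeps the cross-thread access legal in WinForms.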
The first thing you have to do with any PerC work is create a session with the SDK, so you need an object of type PXCMSession. Let us call it session.
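Creating the session looks roughly like this. The type and member names below are as I recall them from the Beta SDK's libpixclr.dll and may differ in later releases, so verify against the SDK samples:

```csharp
// Requires a reference to libpixclr.dll from the PerC SDK's bin folder.
PXCMSession session;
pxcmStatus status = PXCMSession.CreateInstance(out session);
if (status < pxcmStatus.PXCM_STATUS_NO_ERROR)
{
    MessageBox.Show("Failed to create PerC session: " + status);
    return;
}

// ... use session to create capture and face-analysis modules ...

session.Dispose(); // release the native session when done
```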
So install the SDK from the Perceptual Computing SDK Download Page (I used Beta 3). Don't worry about not having the mentioned camera; for this project, your built-in camera should work (at least it did for me).
Download the project's source and fire it up in Visual Studio. In my case, I had to switch to the Debug configuration for the project to find the references, compile, and run.
And run it did (you've got to love my "What am I doing! It's Saturday!" look).
Make sure you read the rest of the CodeProject post to see how he uses the SDK and some of the hoops he had to jump through...
Is this going to set the world on fire? I doubt it. Is this something you could do with OpenCV? Probably. But come on! This is a cool new SDK to play with and bend to your will!