Today's project recreates something you see when using the Kinect on the Xbox 360: the sensor scans up and down, adjusting and calibrating itself. Yet this isn't a capability I've seen implemented with the Kinect for Windows SDK or in associated applications. Until now...
This program adjusts the Kinect's tilt so it can better see the user; this way the user doesn't have to move backward or forward (unless the Kinect's angle range isn't enough to bring the user into view). I think this is a useful thing to have when you start your Kinect application.
The code is very simple to follow: step through the Kinect's elevation angle to change the sensor's "view" and keep the user in sight. For each angle, count how many joints the Kinect is tracking, and save that angle as the best whenever we reach a new maximum number of tracked joints.
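The scan described above can be sketched as follows. This is a minimal simulation, not the project's actual code (which uses the Kinect for Windows SDK from C#): `FakeSensor`, `set_elevation_angle`, and `tracked_joint_count` are stand-in names for illustration, and the dwell pause reflects the need to let the motor settle between moves.

```python
import time

MIN_ANGLE, MAX_ANGLE, STEP = -27, 27, 9  # the Kinect tilt range is roughly +/-27 degrees


class FakeSensor:
    """Stand-in for the real sensor; maps each angle to a joint count."""

    def __init__(self, joints_by_angle):
        self.joints_by_angle = joints_by_angle
        self.angle = 0

    def set_elevation_angle(self, angle):
        self.angle = angle  # the real call physically tilts the sensor

    def tracked_joint_count(self):
        return self.joints_by_angle.get(self.angle, 0)


def find_best_angle(sensor, dwell_seconds=2.0):
    """Step through elevation angles; return the one with the most tracked joints."""
    best_angle, best_joints = 0, -1
    for angle in range(MIN_ANGLE, MAX_ANGLE + 1, STEP):
        sensor.set_elevation_angle(angle)
        time.sleep(dwell_seconds)  # skeletons aren't tracked mid-move, so wait
        joints = sensor.tracked_joint_count()
        if joints > best_joints:  # new maximum -> remember this angle
            best_angle, best_joints = angle, joints
    sensor.set_elevation_angle(best_angle)  # finish on the winning angle
    return best_angle
```

In real use the dwell would be a second or two per angle; a shorter value just makes the simulation faster.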
You may notice two things about the code:
- First, I'm using elapsed time to control the pauses between Kinect movements. The reason is that if we compare the Kinect's reported angle with the angle we set, we might not get the same value, for several reasons (the sensor is not 100% accurate, or it may not be physically able to rotate to that exact angle).
- Second, I'm only scanning the skeleton at certain angles. I don't know whether this is a limitation of the Kinect or something I did wrong in the code, but I couldn't track any joints while the sensor was moving. So what I did was move the sensor between a few fixed angles and wait there a bit, counting the tracked joints at each of those angles.
Known limitations:
- it only supports one user at this moment;
- the scanning process is not ideal.
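The timing point in the first bullet can be illustrated with a small sketch. This is an assumption-laden illustration, not the project's code: `set_elevation_angle` and `get_elevation_angle` are hypothetical stand-ins for the SDK's tilt API. The first helper trusts the clock, as the post describes; the second shows an alternative that polls with a tolerance band instead of exact equality.

```python
import time


def move_and_settle(sensor, target_angle, settle_seconds=1.5):
    """The post's approach: move, then simply wait a fixed interval."""
    sensor.set_elevation_angle(target_angle)
    time.sleep(settle_seconds)  # trust the clock, not the reported angle


def move_with_tolerance(sensor, target_angle, tolerance=2, timeout=3.0, poll=0.1):
    """Alternative: poll until the reported angle is within a tolerance band."""
    sensor.set_elevation_angle(target_angle)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        # Exact equality may never hold; a small band absorbs motor inaccuracy.
        if abs(sensor.get_elevation_angle() - target_angle) <= tolerance:
            return True
        time.sleep(poll)
    return False  # never settled close enough within the timeout
```

A tolerance check avoids guessing a settle time, but the fixed wait is simpler and sidesteps the inaccuracy problem entirely, which is presumably why the project uses it.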
Project Information URL: http://n0n4m3.codingcorner.net/?p=854
Project Source URL: http://n0n4m3.codingcorner.net/wp-content/uploads/2012/05/KinectCalibration.zip