Understanding the Kinect Coordinate Mapping, with a little help from Vangos Pterneas
Vangos Pterneas, Friend of the Gallery, has posted another great article, this time helping all of us understand the Kinect Coordinate Mapping for v1 AND v2...
Other recent Gallery posts from Vangos Pterneas:
- Looking at the Kinect for Windows v2
- Color, depth and infrared streams in the Kinect for Windows v2 world (here's how)
- Kinect gestures, an implementation walk through
- Put down the measuring tape. Using the Kinect to find your height
- Body Tracking with the Kinect for Windows v2
- Kinect for Windows v2 will make you green (with envy at its background removal features)
Understanding Kinect Coordinate Mapping
This is another post I'm publishing after getting some good feedback from my blog subscribers. It seems that a lot of people creating Kinect projects share a common problem: how to properly project data on top of the color and depth streams.
As you probably know, Kinect integrates a few sensors into a single device:
- An RGB color camera – 640×480 in version 1, 1920×1080 in version 2
- A depth sensor – 320×240 in v1, 512×424 in v2
- An infrared sensor – 512×424 in v2
These sensors have different resolutions and are not perfectly aligned, so their view areas differ. It is obvious, for example, that the RGB camera covers a wider area than the depth and infrared cameras. Moreover, elements visible from one camera may not be visible from the others. Here’s how the same area can be viewed by the different sensors:
Suppose we want to project the human body joints on top of the color image. Body tracking is performed using the depth sensor, so the coordinates (X, Y, Z) of the body points are correctly aligned with the depth frame only. If you try to project the same body joint coordinates on top of the color frame, you’ll find out that the skeleton is totally out of place:
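To see why the skeleton ends up out of place, it helps to model each camera as a simple pinhole projector. The sketch below projects the same 3D body joint through two different sets of camera intrinsics; the focal lengths and principal points used here are hypothetical round numbers for illustration, not the real Kinect calibration, and real mapping also involves lens distortion and the physical offset between cameras.

```python
# Illustrative pinhole projection: the same 3D point lands at different
# pixel coordinates in the depth and color images, and naively rescaling
# the depth pixel to the color resolution does NOT fix it.
# All intrinsics below are hypothetical, not actual Kinect calibration.

def project(point, fx, fy, cx, cy):
    """Project a 3D camera-space point (meters) to 2D pixel coordinates."""
    x, y, z = point
    return (fx * x / z + cx, fy * y / z + cy)

# Hypothetical intrinsics for a 512x424 depth camera and a 1920x1080 color camera
DEPTH = dict(fx=365.0, fy=365.0, cx=256.0, cy=212.0)
COLOR = dict(fx=1060.0, fy=1060.0, cx=960.0, cy=540.0)

joint = (0.5, 0.3, 2.0)             # a body joint in 3D camera space (meters)

u_d, v_d = project(joint, **DEPTH)  # where it appears in the depth frame
u_c, v_c = project(joint, **COLOR)  # where it appears in the color frame

# Naive rescale of the depth pixel to color resolution: tens of pixels off
u_naive = u_d * 1920 / 512
v_naive = v_d * 1080 / 424

print((u_d, v_d))           # depth-frame pixel
print((u_c, v_c))           # correct color-frame pixel
print((u_naive, v_naive))   # naive rescale misses the correct pixel
```

With these numbers the naive rescale misses the correct color pixel by dozens of pixels horizontally, which is exactly the "skeleton out of place" effect described above.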
Of course, Microsoft is aware of this, so the SDK comes with a handy utility named CoordinateMapper. CoordinateMapper's job is to convert a point from the 3D camera space to the corresponding point in the 2D color or depth space – and vice versa. CoordinateMapper is a property of the KinectSensor class, so it is tied to each Kinect sensor instance.
You can download a test project from GitHub and check how CoordinateMapper is used. To understand it more thoroughly, continue reading this tutorial.
Project Information URL: http://pterneas.com/2014/05/06/understanding-kinect-coordinate-mapping/