Shaking some sense into using multiple Kinects with Shake 'n' Sense
This is one of those weird things that you just wouldn't expect until you see it...
Shake n Sense Makes Kinects Work Together!
Microsoft Research has discovered that shaking Kinects, far from making them fall apart, makes them work together. See it in action in the video.
This is one of those ideas that, once you have seen it, you can't believe you didn't think of it first. The only barrier to thinking of it is that you might not be thinking big enough. If you find one Kinect with its depth camera sufficient, then you really won't be interested in this idea, even though it is very clever. Using more than one Kinect at a time greatly extends what you can do, and it isn't prohibitively expensive.
However there is a problem.
Multiple Kinects tend to interfere with one another. A Kinect measures the depth of a point by projecting a pattern of infrared dots into the scene and detecting how far they appear shifted due to parallax. This is great when there is only one Kinect, but if you have more than one there is no way of separating out their dots. What this means is that one Kinect can "see" an infrared dot projected by another Kinect, mistake it for one of its own, and hence incorrectly estimate the distance.
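The parallax measurement amounts to simple triangulation: the farther a surface is, the less a projected dot appears shifted between the IR projector and the IR camera. A minimal sketch of that relationship follows; the baseline and focal-length values are illustrative assumptions, not the Kinect's actual calibration.

```python
# Sketch of how a structured-light sensor turns dot disparity into depth.
# The constants below are illustrative, not the Kinect's real calibration.

def depth_from_disparity(disparity_px, baseline_m=0.075, focal_px=580.0):
    """Triangulate depth from the observed shift of a projected dot.

    disparity_px: horizontal shift (pixels) of the dot relative to its
                  position at a known reference distance.
    baseline_m:   distance between IR projector and IR camera (assumed).
    focal_px:     camera focal length in pixels (assumed).
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return baseline_m * focal_px / disparity_px

# With these assumed constants, a dot shifted by 29 pixels
# triangulates to roughly 1.5 m:
print(round(depth_from_disparity(29.0), 2))  # → 1.5
```

This also makes the interference failure mode concrete: if the dot the sensor matched actually came from another Kinect, the disparity it measures is meaningless and the triangulated depth is wrong.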
The problem is that the Kinect light pattern isn't modulated in a way that lets one unit tell which dots belong to its projected pattern. Now the solution is obvious - modulate the patterns. This sounds difficult and probably involves changing the firmware - not according to Microsoft Research who say all you need to do is shake it.
The idea is simple - add a motor with an offset weight. Run the motor so that it shakes the Kinect and the result is an almost magical improvement in multisensor detection accuracy.
The reason it works is that the shaking blurs the dot patterns of the other Kinects as each unit sees them, while each unit's own pattern stays sharp because its projector and camera move together.
Project Information URL: http://www.i-programmer.info/news/194-kinect/3869-shake-n-sense-makes-kinects-work-together.html
Shake 'n' Sense is a novel yet simple mechanical technique for mitigating the interference when two or more Kinect cameras point at the same part of a physical scene. The technique is particularly useful for Kinect, where the structured light source is not modulated. It requires only mechanical augmentation of the Kinect, without any need to modify the internal electronics, firmware or associated host software.
We present a method for reducing interference between multiple structured light-based depth sensors operating in the same spectrum with rigidly attached projectors and cameras. A small amount of motion is applied to a subset of the sensors so that each unit sees its own projected pattern sharply, but sees a blurred version of the patterns of other units. If high spatial frequency patterns are used, each sensor sees its own pattern with higher contrast than the patterns of other units, resulting in simplified pattern disambiguation.
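The contrast argument above can be illustrated with a toy 1-D simulation: a high-spatial-frequency dot pattern retains its variance when projector and camera move together, while an interfering unit's pattern is smeared by the relative motion. The pattern density, blur width, and use of standard deviation as a contrast proxy are all illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy 1-D illustration of the Shake 'n' Sense principle. The sizes,
# dot density, and blur width are arbitrary illustrative choices.

rng = np.random.default_rng(0)
own = (rng.random(200) < 0.2).astype(float)    # this unit's dot pattern
other = (rng.random(200) < 0.2).astype(float)  # an interfering unit's pattern

def motion_blur(signal, width):
    """Average over `width` samples, mimicking blur from relative motion."""
    kernel = np.ones(width) / width
    return np.convolve(signal, kernel, mode="same")

def contrast(signal):
    """Standard deviation as a simple proxy for pattern contrast."""
    return signal.std()

# Each unit sees its own pattern sharp (projector and camera move
# together) but the other unit's pattern blurred by relative motion.
seen_own = own
seen_other = motion_blur(other, width=15)

print(contrast(seen_own) > contrast(seen_other))  # → True
```

Blurring averages the high-frequency dots toward their mean, so the interfering pattern's contrast drops sharply while the unit's own pattern is unaffected, which is what lets the sensor keep locking onto its own dots.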
An analysis of this method is presented for a group of commodity Microsoft Kinect color-plus-depth sensors with overlapping views. We demonstrate that applying a small vibration with a simple motor to a subset of the Kinect sensors reduces interference, which manifests as holes and noise in the depth maps. Using an array of six Kinects, our system reduced interference-related missing data from 16.6% to 1.4% of the total pixels. Another experiment with three Kinects showed an 82.2% reduction in the measurement error introduced by interference.
A side-effect is blurring in the color images of the moving units, which is mitigated with post-processing. We believe our technique will allow inexpensive commodity depth sensors to form the basis of dense large-scale capture systems.
Project Information URL: http://www.cs.unc.edu/~fuchs/kinect_VR_2012.pdf