BitFlipper said:
CreamFilling512 said: *snip*
That is surprising, because everything I have read up until now stated it was a time-of-flight camera. Here is an example. What changed?
Also, saying "it projects a grid on the scene in near-infrared light" doesn't explain how it works at all. How does it come up with a depth value for each pixel? I am not saying it doesn't do that, I am just saying so much information is left out that we still have no idea how this actually works.
The link you provided doesn't help much either. It says another sensor "reads and then interprets" the grid. Great, but that still doesn't explain anything. Reading depth is much more complex than reading light intensity.
Microsoft probably bought up the ToF company thinking "this'll be useful" before realising it wasn't.
The grid approach works if the resolution is high enough: assuming the projected beams diverge rather than being collimated, objects further away will have wider grid spacing on them than objects that are closer.
...that's just my hypothesis.
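For what it's worth, the usual textbook way a structured-light sensor turns a projected pattern into depth is triangulation: the projector and the IR camera sit a known baseline apart, so each projected dot lands at a different image position depending on how far away the surface is, and depth falls out of the measured shift. A minimal sketch, assuming a pinhole camera model with hypothetical numbers (focal length, baseline, and disparity are all made up for illustration, not taken from any Kinect spec):

```python
def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Classic triangulation relation: z = f * b / d.

    disparity_px: how far the projected dot shifted in the image (pixels)
    focal_px:     camera focal length expressed in pixels
    baseline_m:   projector-to-camera separation in metres
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px


# Made-up example: f = 580 px, baseline = 7.5 cm, dot shifted 10 px.
z = depth_from_disparity(10.0, 580.0, 0.075)
print(round(z, 3))  # 4.35 metres
```

Note the shift grows as objects get closer, which is why nearby objects give better depth precision than distant ones under this scheme.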