That is surprising, because everything I have read up until now stated it was a time-of-flight camera.
Here is an example. What changed?
Also, saying "it projects a grid on the scene in near-infrared light" doesn't explain how it works. At all. How does it come up with a depth value for each pixel? I am not saying it doesn't do that, I am just saying a huge amount of other information is left out, and what remains doesn't give us any idea of how this actually works.
The link you provided doesn't help much either. It says another sensor "reads and then interprets" the grid. Great, but that still isn't an explanation of how it works. Reading depth is much more complex than reading light intensity.
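For what it's worth, the usual explanation for structured-light sensors is triangulation: the projector and the IR camera sit a known distance apart, so each projected dot appears shifted horizontally in the camera image by an amount (the disparity) that depends on how far away the surface is. A minimal sketch of that geometry, assuming this is how the Kinect works (all numbers below are illustrative, not actual Kinect parameters):

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Classic triangulation used in stereo and structured-light setups:
    z = f * b / d, where f is the focal length in pixels, b is the
    projector-to-camera baseline in metres, and d is the observed
    horizontal shift of a pattern feature in pixels."""
    if disparity_px <= 0:
        # No measurable shift: the point is effectively at infinity.
        return float("inf")
    return focal_length_px * baseline_m / disparity_px

# Example with made-up values: a dot shifted 20 px, a 580 px focal
# length, and a 7.5 cm baseline between projector and IR camera.
z = depth_from_disparity(20.0, 580.0, 0.075)
print(round(z, 3))  # 2.175 -> about 2.2 metres away
```

The hard part the articles gloss over is not this formula but the correspondence problem: figuring out which projected dot is which in the camera image, which is reportedly why such sensors use a pseudo-random pattern rather than a plain grid.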