I wonder if the tech could be scaled down to work on handheld devices. Multi-touch is all very nice, but it would be great if you didn't have to touch the screen for it to work.
Unless they're using some really unconventional algorithm, I'd say no: the bottlenecks with machine vision algorithms tend to be memory size and memory bandwidth (and CPU to an extent), and current handhelds' resources don't cut it for anything but the most trivial cases.
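To make the bandwidth point concrete, here's a rough back-of-envelope sketch. Every figure in it is an illustrative assumption (resolution, frame rate, number of full-frame passes), not anything Natal actually uses:

```python
# Back-of-envelope: sustained memory traffic for a naive vision pipeline.
# All figures are illustrative assumptions, not real Natal specs.

WIDTH, HEIGHT = 640, 480   # assumed camera resolution
FPS = 30                   # assumed frame rate
BYTES_PER_PIXEL = 2        # e.g. 16-bit depth values
PASSES = 10                # assumed full-frame passes: filtering,
                           # segmentation, tracking, etc.

frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL
traffic_mb_per_s = frame_bytes * PASSES * FPS / 1e6

print(f"Frame size: {frame_bytes / 1e6:.2f} MB")          # 0.61 MB
print(f"Sustained traffic: {traffic_mb_per_s:.0f} MB/s")  # 184 MB/s
```

Even with these modest assumptions you're pushing hundreds of MB/s through memory before doing any clever math, which is comfortable for a console but a big ask for a handheld's memory subsystem.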
I suspect even Project Natal uses some of the Xbox 360's own resources, because it would be very expensive to put all that processing on an ASIC. This isn't like the Wiimote, which is doing something algorithmically trivial. On the other hand, if Natal is done in software, writing a driver for it would be quite hard.
The demo makes what they're doing look easy, but it's really, really hard. TBH I wouldn't be surprised if it didn't work nearly as well as in the demo when it's finally released.