PocketXP wrote: Apple's tech is cool but may be considered 'old school' when compared to this approach.
They're attempting to show the benefits of using IR over the traditional capacitive touch panels.
As noted in the video, using a smartphone, TV remote or any IR pointer to control the UI is very cool.
A likely product from this is a "low-cost" Multi-touch TV Remote.
Bear in mind that an IR remote does not have the pinpoint focus of a laser pointer; its beam is diffuse. So, regardless of the resolution of the IR receiver array, IR reception from a distance will be about as accurate as a BIG fat finger. Point-click-move UI control will only be possible either in close proximity to an average-sized screen or maybe from across the room with an 80+ inch screen, UNLESS the system is capable of accurately detecting an IR point source from a distance.
If not, then as with standard IR remote control from a distance, any IR remote in this scenario will still have to function via a binary protocol, translating "next", "prev", "first", and "last" buttons and arrow keys to manipulate the UI. One way to resolve this is to fit the remote with an electronically controlled focusing lens: at the beginning of each remote control session, the remote and the computer communicate and perform a quick auto-focus routine that brings the IR beam into pinpoint focus for its current distance from the screen. Then you can have accurate point-click-move UI manipulation. Another way would be to embed a multi-touch system in the remote itself, as you mentioned, and have it transmit touch information to the screen via a standard IR binary protocol.
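To make that second idea concrete, here's a rough sketch of how a remote's multi-touch data could be packed into a binary IR frame. This is entirely hypothetical: the header byte, 10-bit coordinate range, and checksum are my own assumptions for illustration, not any real IR remote standard.

```python
# Hypothetical frame layout (not a real standard):
# [0xA5 header][touch count][x_hi x_lo y_hi y_lo per touch][checksum]

def encode_touch_frame(touches):
    """Pack (x, y) touch points (each 0-1023) into a byte frame."""
    frame = bytearray([0xA5, len(touches)])  # header + touch count
    for x, y in touches:
        # split each 10-bit coordinate into a high and low byte
        frame += bytes([(x >> 8) & 0x03, x & 0xFF,
                        (y >> 8) & 0x03, y & 0xFF])
    frame.append(sum(frame) & 0xFF)  # simple additive checksum
    return bytes(frame)

def decode_touch_frame(frame):
    """Recover the touch points, verifying header and checksum."""
    assert frame[0] == 0xA5 and sum(frame[:-1]) & 0xFF == frame[-1]
    n = frame[1]
    return [((frame[2 + 4 * i] << 8) | frame[3 + 4 * i],
             (frame[4 + 4 * i] << 8) | frame[5 + 4 * i])
            for i in range(n)]
```

Even over a slow, diffuse IR link, a frame like this only needs a few dozen bits per update, which is why the "smart remote, dumb beam" approach sidesteps the focusing problem entirely.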
I'm no Apple fanboy, but Microsoft's technology may appear "old school" when compared to Apple's "next" approach: their full-screen image sensor array technology. The Apple screen will be able to see you... from across the room. If the image/object analysis is good enough, it will be able to detect when you are pointing or drawing with your forefinger. Draw out an imaginary screen with your forefinger to let the Apple screen register the X/Y extent of your motions (or have it automatically and continuously adjust according to your perceived shoulder width, accounting for changes in your proximity to the screen), then use multiple fingers, elbows, eyes, eyebrows, and mouth shape to point, click, move, draw, and communicate to your heart's content. This could also extend to multi-person interfaces down the road, where the faces and gestures of a variable number of people control the UI and applications.
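The shoulder-width idea can be sketched in a few lines. This is my own toy formulation, not anything Apple has described: normalize the tracked fingertip position by the apparent shoulder width, so the same arm gesture maps to the same coordinates whether you stand near the camera or across the room.

```python
def normalize_gesture(finger_px, shoulders_px):
    """Map a fingertip position (in camera pixels) to distance-invariant
    coordinates, measured in shoulder-widths from the body center.

    finger_px: (x, y) fingertip position in the camera image.
    shoulders_px: ((lx, ly), (rx, ry)) left/right shoulder positions.
    """
    (lx, ly), (rx, ry) = shoulders_px
    # apparent shoulder width shrinks as the person moves away,
    # so dividing by it cancels out the distance to the camera
    shoulder_w = ((rx - lx) ** 2 + (ry - ly) ** 2) ** 0.5
    cx, cy = (lx + rx) / 2, (ly + ry) / 2  # body center as origin
    fx, fy = finger_px
    return ((fx - cx) / shoulder_w, (fy - cy) / shoulder_w)
```

A person twice as far away appears half as large, but their fingertip offset and shoulder width shrink by the same factor, so the normalized output is unchanged; that's the whole trick behind the "constantly adjust for proximity" suggestion above.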
A hybrid of the two technologies may solve a lot of issues and create new opportunities that neither can address alone. I could see a cross-licensing agreement between Microsoft and Apple. If Apple's image sensor can vary or expand the light spectrum it detects into the IR range, then all that would need to be added to each pixel is an IR transmitter element.
A cross-licensing agreement, and possibly cooperative research and development, could save time and money, leading to a more robust product and an industry standard rather than competing technologies that fragment the marketplace. If the costs of the two technologies end up roughly equal, then I imagine Apple's future tech will find greater adoption since, on the surface and from pure speculation, it seems to facilitate all of the potential applications of MS MultiTouch and more.