BigMTBrain

Niner since 2006

  • Inside MultiTouch: Team, Demo, Lab Tour

    PocketXP wrote:
     Apple's tech is cool but may be considered 'old school' when compared to this approach.

    They're attempting to show the benefits of using IR over the traditional capacitive touch panels.
     
    As noted in the video, using a smartphone, TV remote or any IR pointer to control the UI is very cool.

    A likely product from this is a "low-cost" Multi-touch TV Remote.


    Bear in mind that an IR remote does not have the pinpoint focus of a laser pointer; it's diffuse. So, regardless of the resolution of the IR receiver array, IR reception from a distance will be as accurate as a BIG fat finger. Point-click-move UI control will only be possible either in close proximity to an average-sized screen or maybe from across the room with an 80+ inch screen. UNLESS it's capable of accurately detecting an IR point source from a distance.

    If not, then as with standard IR remote control from a distance, any IR remote in this scenario will still have to function via a binary protocol, translating "next", "prev", "first", "last" buttons and arrow keys into commands that manipulate the UI (a rough sketch of this kind of button-to-command translation follows at the end of these comments). One way to resolve this is to fit the remote with an electronically controlled focusing lens: at the beginning of each remote-control session, the device and computer communicate and perform a quick auto-focus routine that ensures the IR beam is in perfect pinpoint focus relative to its distance from the screen. Then you can have accurate point-click-move UI manipulation. Another way would be for the remote control itself to have an embedded multi-touch system, as you mentioned, that transmits touch information to the screen via a standard IR binary protocol.

    I'm no Apple fanboy, but Microsoft's technology may appear "old school" when compared to Apple's "next" approach, their full-screen image sensor array technology. The Apple screen will be able to see you... from across the room. If the image/object analysis is good enough, it will be able to detect when you are pointing or drawing with your forefinger. Draw out an imaginary screen with your forefinger to let the Apple screen register the X/Y extent of your motions (or have it automatically and constantly adjust according to perceived shoulder width, to account for changes in your proximity to the screen), then use multiple fingers, elbows, eyes, eyebrows, and mouth shape to point-click-move, draw, and communicate to your heart's content. This could also extend to multi-person interfaces down the road, where the faces and gestures of a variable number of people control the UI and applications.

    A hybrid of the two technologies may solve a lot of issues and create new opportunities that neither alone can address. I could see a cross-licensing agreement between Microsoft and Apple. If Apple's image sensor can vary or expand the light spectrum it detects into IR, then all that would need to be added to each pixel would be the IR transmitter element.

    A cross-licensing agreement and possibly cooperative research and development could save time and money, leading to a more robust product and an industry standard rather than competing technologies that fragment the marketplace. Individually, if the costs of the technologies are close to equal, then I imagine that Apple's future tech will find greater adoption since, on the surface and from pure speculation, it seems to facilitate all of the potential applications of MS MultiTouch and more.
  • PhotoSynth: What. How. Why.

    The phrase "paradigm shift" is often misused, over used and abused. It takes only the slightest imagination and awareness to see that the insights that led to Photosynth are indeed the beginnings of a paradigm shift, part of the steadily accelerating advance towards the so-called technological singularity.

    While much of its application is embodied and embedded in the web, I believe the impact of Photosynth and its cloud of applications will be at least that of Google, and perhaps even of the web itself. It is a bold statement, but I believe it will rank higher than even the advent of television.

    It's not as though pieces of this technology haven't been brewing for quite some time in the minds of many (including my big empty brain). But sometimes magic happens... someone asks "I wonder if..." and proceeds to test their hypothesis. Others recognize the discovery's significance and the future is changed. Not in a small evolutionary step, but in a huge revolutionary flight. And it comes in a rush and a flash as connections are made in the imagination of another, then another, and still another.

    While the web brought us connections and information, and Google brought us needles from the haystack, nothing is more potent than touching the senses directly, without a process of interpretation. Photosynth will do that; it will bring us the world, visually. And even more...

    The universe is a hierarchy of emergent systems, including each of us individually and our emergent societies. Photosynth will modify us individually and socially. And even more... it will bring massive and coherent sight and visual correlation across space and time (omnipresence, or at least omni-sightedness) to whatever that next level of emergence is, or is shaping up to be.

    Whew! Got carried away there... um, maybe.

    Now, be sure to take photos with those 15-second sound bites. They'll create a wash of 3D-aural delight when the similarly registered bells toll in the cathedral square or the sax man plays his tune as you window-shop down the mag-mile in Chicago. Oh, and tell them to hurry and get those bio- and enviro-sensors embedded in the next-gen cellphones--man does not virtualize by sight alone!

    >-Edit-<

    Well, hold on... this just in... Reported Nov. 1st, '06 on ScienceDaily:

    Researchers Teach Computers How To Name Images By 'Thinking'

    Penn State researchers have "taught" computers how to interpret images using a vocabulary of up to 330 English words, so that a computer can describe a photograph of two polo players, for instance, as "sport," "people," "horse," "polo."

    Read full article here.

    That combined with this, even better.
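
Returning to the IR-remote idea from the MultiTouch comment above, here is a minimal sketch of the kind of binary button-to-command translation described there. It is purely illustrative: the scan-code values, the UINavigator class, and the menu items are assumptions made up for this example, not part of any real remote-control or MultiTouch API.

    # Illustrative only: scan codes, class, and menu items are hypothetical;
    # they are not part of any real IR remote or MultiTouch API.
    IR_CODES = {
        0x01: "next",
        0x02: "prev",
        0x03: "first",
        0x04: "last",
    }

    class UINavigator:
        # Translates discrete IR button codes into movement of a focus
        # cursor over a list of on-screen items.
        def __init__(self, items):
            self.items = items
            self.index = 0

        def handle(self, code):
            command = IR_CODES.get(code)
            if command == "next":
                self.index = min(self.index + 1, len(self.items) - 1)
            elif command == "prev":
                self.index = max(self.index - 1, 0)
            elif command == "first":
                self.index = 0
            elif command == "last":
                self.index = len(self.items) - 1
            # Unknown codes leave the focus where it is.
            return self.items[self.index]

    nav = UINavigator(["Photos", "Music", "Videos", "Settings"])
    print(nav.handle(0x01))  # "Music" -- one "next" press moves focus forward
    print(nav.handle(0x04))  # "Settings" -- "last" jumps to the end of the list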