Coffeehouse Thread

11 posts

Forum Read Only

This forum has been made read only by the site admins. No new threads or comments can be added.

What is OSV.Next?

Back to Forum: Coffeehouse
  • Ian2

    So I finally got hold of a Leap Motion yesterday and played with it for an hour or two (all I had available).

    My initial thoughts are that it does what it says on the tin, and is pretty good at doing it to boot.

    But ....

    To me this kind of interaction is crying out for its own Operating System UI - which I don't believe is a logical progression from where we are today (with Windows 8 or Unix-derivative UIs).

    I have an idea what this might be but have struggled to put it down on paper:

    Watch the video in the link above, and then the following will maybe make more sense.

     

    To me it seems like our hands and arms will become 'virtually extended', such that they exist behind the screen within a three-dimensional (stereoscopic) plane.

    A new set of applications needs to be developed that exist within this 3D plane and are designed for interaction by our (virtual) hands and fingers. (Noting that most interaction will probably take place with minimal finger movement.)

    I believe that this will feel very natural, perhaps in the same way that a car becomes an extension of our bodies when we drive?

    I guess audio commands/feedback and some kind of tactile feedback to our fingers might also play a part?

     

    10 years' time?

     

    And before someone says 'Minority Report': yes, but that was an application designed to locate and view documents (if memory serves). There must be plenty of other functional uses for this style of interface?

     

     

  • Blue Ink

    @Ian2: Amazing. As a regular CAD user, I cannot wait to see what this can do.

    That said, I'm kind of worried about how well this would work for the general public; it looks like there's going to be a "palm rejection" problem on steroids here. Either users get used to moving their hands off the sensing area, or the UI needs to be very picky (or very smart) about which gestures count as intentional and valid. Either way, I expect it will need some training.
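To make the "picky about intentional gestures" idea concrete, here's a minimal sketch of one way such a filter could work: a deliberate swipe travels a meaningful overall distance with steady frame-to-frame motion, while an idle hand jitters without going anywhere. Everything here (the `Sample` type, the thresholds, the sensor frame in millimetres) is a made-up illustration, not the actual Leap Motion API.

```python
from dataclasses import dataclass
import math

@dataclass
class Sample:
    t: float  # timestamp in seconds
    x: float
    y: float
    z: float  # position in millimetres, hypothetical sensor frame

def is_intentional(samples, min_travel=40.0, max_jitter=8.0):
    """Crude intent filter: a gesture counts as intentional if the
    hand travels a meaningful net distance while individual frames
    don't jitter wildly. Thresholds are illustrative, not tuned."""
    if len(samples) < 2:
        return False

    def dist(a, b):
        return math.dist((a.x, a.y, a.z), (b.x, b.y, b.z))

    net_travel = dist(samples[0], samples[-1])
    steps = [dist(a, b) for a, b in zip(samples, samples[1:])]
    mean_step = sum(steps) / len(steps)
    # Deliberate motion: goes somewhere overall, smoothly per frame.
    return net_travel >= min_travel and mean_step <= max_jitter
```

A real system would need much more (hysteresis, per-user calibration, gesture vocabularies), but even this two-threshold gate shows why some training on both sides - user and UI - seems unavoidable.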

    With a little luck, the coolness of "casting spells" at your PC will be a big enough incentive.

    As for audio commands and feedback, the technology never worked too well for PCs; it's one thing to bark commands at an Xbox in your living room, quite another to have to narrate what you're doing in public. I doubt that's ever going to change until they nail subvocalization.

    Once that happens, we'll have proven Clarke's third law again: "any sufficiently advanced technology is indistinguishable from magic".

  • exoteric

    Minority Report? That was years after Johnny Mnemonic. Albeit in Mnemonic he had to wear gloves. It's always great when reality exceeds fiction.

  • Bas

    It definitely looks cool, but I'm wondering what people will actually do with it. I'm sure people will come up with awesome unexpected uses like they did with the Kinect, but for now I'm struggling to see what that will be.

    I'm not too sure about this whole "moving 3D objects around with your hands" thing in a general purpose OS/application, though. For CAD users, sure, but for others it just seems like too complicated a solution.

  • Ian2

    @Bas: You might be right, but I would love to try it to judge usability etc.

  • DeathByVisualStudio

    3D desktops have been tried in the past with limited success. Maybe interfaces like this will finally allow a UI to be developed that utilizes the z-axis so more can be packed into a limited space.

    Bas is right though; it may be too much for users to handle for general-purpose uses. Just look at people's reaction to the changes made to Windows for W8 -- not great. Then again, people seemed to take to the Kinect pretty quickly.

  • Richard.Hein

    There's so much potential, but you're absolutely right: things are going to have to be redesigned at the OS level to maximize it. It took years for touch to get into the OS, and it's obvious that the traditional desktop experience isn't that great for it; with this kind of precision 3D touchless interface, it will take a considerable amount of time for things to evolve.

    At the base of the OS, the input itself has to respect the degrees of freedom allowed by this kind of UI. I mean, things like fonts and writing will have to change. I think that natural language will evolve in huge leaps as we abandon the keyboard as the primary input and a new generation of calligraphy emerges in 3D space. People will be able to rapidly create 3D models, like manipulating clay, to express thoughts and ideas. They will be able to animate them and make them interactive. Text is not going away, but I think a new world of media-as-language will emerge. That has to be baked into the OS: a new kind of shell that makes 3D models first class.

    EDIT: I also think that technology like Photosynth will play a huge role, as it will allow hyperlinking images and 3D models. This has already been demonstrated by Bing and Photosynth, i.e. an image in a picture links to a 3D model made from pictures of the same source. So imagine if I model something with a basic 2D outline, or even 3D virtual clay - like the shape of a house I want to search for on Bing Maps - and when I find it, I can create a hyperlink to related shapes.

  • magicalclick

    I hope Surface will have this built-in, so there's no need for a keyboard cover anymore. It would also work better when typing on my lap.

    Leaving WM on 5/2018 if no apps, no dedicated billboards where I drive, no Store name.
  • BitFlipper

    This could work well in some types of games.

  • DCMonkey

    Seems like it would beat reaching out to touch a vertical touchscreen desktop display as a secondary input mechanism. But the airspace would have to be pretty close to the keyboard and mouse. Maybe incorporate a foot pedal to activate it. Or have buttons on the keyboard and mouse that activate detection of the opposite hand.
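The pedal-or-button idea above is essentially a "clutch" that gates gesture input. A minimal sketch of how such gating could look, with hypothetical names throughout (this is not any real input API):

```python
class GestureClutch:
    """Gate hand-tracking events behind a held 'clutch' control
    (a key, pedal, or mouse button). Events are only forwarded
    to the UI while the clutch is engaged, so resting or
    incidental hand movement is dropped entirely."""

    def __init__(self):
        self.engaged = False
        self.forwarded = []  # events that actually reached the UI

    def press(self):
        """Clutch engaged (e.g. pedal down)."""
        self.engaged = True

    def release(self):
        """Clutch released: back to keyboard/mouse as usual."""
        self.engaged = False

    def on_hand_event(self, event):
        """Forward the event only while the clutch is held."""
        if self.engaged:
            self.forwarded.append(event)
            return event
        return None  # dropped: hand was just hovering
```

One nice property of a clutch over a smart intent filter is that it's predictable: the user always knows whether the sensor is "live", the same way push-to-talk beats open-mic voice detection.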

  • Ian2

    I suspect that someone will develop a single app, maybe something CAD-oriented, that runs well on existing O/S's - this achieves great success and spurs others on to explore the new opportunities in other ways, again with some success. Someone then takes the new CAD app and replaces the O/S layer with something that fits the new demands better, and the app runs better. Someone else then takes the new underlying layer and writes something different on top of it. Eventually a more generic intermediate layer emerges (Silverlight/Flash-like?) and becomes some sort of standard. In turn that is replaced with something in the O/S, which later becomes the O/S. I don't think 10 years is overly optimistic.
