This week's theme is going to be very loosely based on "Presentations." Today we present an inspirational project that merges the Kinect's 3D gesture recognition with 3D models... (I know, I know, it's a stretch, but Roberto emailed us about this project and we give suggestions from the community a much higher posting priority.)
Fusion4D is an innovative 3D user interface that lets users interact with 3D objects as if the objects were in their hands, allowing them to move, rotate, and scale the objects, explode them into their parts, and even navigate in time to see what the objects looked like in the past or will look like in the future.
Fusion4D is simple to use: the user only has to wear 3D glasses and use speech commands and hand gestures to manipulate the objects. In addition, the system uses low-cost devices, such as the Kinect, and it doesn't require special displays for the 3D images.
How it works
Fusion4D renders stereoscopic 3D images and uses a Kinect device to capture the user's skeleton and voice. To interact with the system, all you need to do is put on a pair of glasses and start manipulating the object as if it were in the real world.
The user interacts with the system using speech commands in English. These commands let users select an object and a manipulation mode.
The "grab" command selects the object and allows users to translate, rotate or scale the object with their hands until the object is released with the "release" command.
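To make the grab interaction concrete, here is a minimal sketch (not the project's actual code, which is built on the Kinect SDK) of how two tracked hand positions can drive translation and scaling: the object follows the midpoint of the hands, and its scale follows the ratio of the current hand separation to the separation at grab time. All names here are illustrative assumptions.

```python
import math

def midpoint(a, b):
    """Midpoint of two 3D points given as (x, y, z) tuples."""
    return tuple((x + y) / 2 for x, y in zip(a, b))

def update_grabbed_object(grab_state, left, right):
    """Recompute the object's transform from the current hand positions.

    `grab_state` holds the object's position and scale plus the hand
    positions captured when "grab" was recognized (hypothetical layout).
    """
    start_mid = midpoint(grab_state["left0"], grab_state["right0"])
    now_mid = midpoint(left, right)
    # Translation: the object follows the midpoint of the two hands.
    position = tuple(p + n - s for p, n, s in
                     zip(grab_state["position0"], now_mid, start_mid))
    # Scale: grows or shrinks with the distance between the hands.
    scale = grab_state["scale0"] * (math.dist(left, right) /
                                    math.dist(grab_state["left0"],
                                              grab_state["right0"]))
    return position, scale
```

Rotation could be handled the same way, from the change in the angle of the left-to-right hand vector since grab time.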
The "explode" keyword allows the user to observe the 3D model in detail; the "show label" command shows descriptive labels for the objects on the screen, and the "change model" command lets the user select other models.
Finally, the "time" command allows users to see what the object would look like in the past or the future by moving their hands along the timeline. If the user needs help, the "help" keyword shows a list of voice commands.
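The command set described above amounts to a small dispatch table: each recognized phrase either switches the interaction mode or toggles a display option. Here is a hedged Python sketch of that control flow (the command names come from the article; the class, fields, and behavior are assumptions for illustration, not Fusion4D's implementation):

```python
class Fusion4DController:
    """Illustrative dispatcher for the voice commands described above."""

    COMMANDS = ("grab", "release", "explode", "show label",
                "change model", "time", "help")

    def __init__(self, models):
        self.models = models        # available 3D models
        self.current = 0            # index of the selected model
        self.mode = "idle"          # "idle", "grabbed", or "timeline"
        self.exploded = False
        self.labels_visible = False

    def on_speech(self, command):
        """Map a recognized speech command to an action."""
        if command == "grab":
            self.mode = "grabbed"   # hands now translate/rotate/scale
        elif command == "release":
            self.mode = "idle"
        elif command == "explode":
            self.exploded = not self.exploded   # split model into parts
        elif command == "show label":
            self.labels_visible = True
        elif command == "change model":
            self.current = (self.current + 1) % len(self.models)
        elif command == "time":
            self.mode = "timeline"  # hands now scrub along the timeline
        elif command == "help":
            return list(self.COMMANDS)
        return self.mode
```

In the real system the `on_speech` entry point would be fed by the Kinect's microphone-array speech recognition rather than called directly.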
Project Information URL: http://www.interlab.pcs.poli.usp.br/fusion4d/