Putting the you in the controller with the Kinect and iController
- Posted: Apr 24, 2012 at 6:00 AM
Today's free-for-educational-and-evaluation-purposes download is one I see asked about a lot: using the Kinect to execute or emulate keyboard and mouse commands.
While there are many open source projects, hacks, and even commercial products that work with the Microsoft Kinect sensor, our experience with them has been suboptimal. One of the key reasons is that even though they perform their designated functions, consideration for human factors is largely missing.
Here are some examples:
- Long waits before the user can operate the interface result in tired arms
- Limited image resolution for detection makes it frustrating, as actions must be repeated before scoring a successful “hit”
- Actions that require motion to follow a certain path, e.g. in depth, are difficult to achieve because of the way the human skeleton is constructed
- Complicated gestures and unintuitive user interfaces leave the user with no idea where or how to start
The iController project is designed to overcome all of the above shortcomings. The philosophy behind the project is founded on the following:
- Very simple-to-use and easy-to-understand interface
- Simple movements to perform many common tasks
- Fully configurable interface (commercial edition only) for quick adaptation to different projects
The iController software uses the Microsoft Kinect sensor hardware as the device to capture inputs to the computer. The primary information captured is motion sequences, which are translated into mouse and keyboard controls. As a generic controller, iController is abstracted from the operating system and from any particular application, so its functionality can be defined by each application for its own purposes.
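To make the motion-to-mouse idea concrete, here is a minimal sketch in Python of mapping a tracked hand position (in metres, as a Kinect-style sensor would report it) onto screen pixel coordinates. The interaction-box dimensions and function names are assumptions for illustration, not taken from the actual iController product.

```python
# Hypothetical sketch: project a tracked hand joint onto the display.
# A Kinect-style sensor reports joint positions in metres relative to
# the sensor; a controller must map them to screen pixels. The physical
# interaction box (0.35 m wide, 0.25 m tall, assumed values) is kept
# small so a modest arm movement sweeps the whole screen without
# tiring the user.

def hand_to_cursor(hand_x, hand_y, screen_w=1920, screen_h=1080,
                   box_w=0.35, box_h=0.25):
    """Map a hand position (metres, sensor space) to pixel coordinates.

    The interaction box is centred on the sensor's origin; positions
    outside the box are clamped to the screen edges.
    """
    # Normalise into [0, 1] within the interaction box, clamping at the edges.
    nx = min(max((hand_x + box_w / 2) / box_w, 0.0), 1.0)
    # Screen y grows downward, sensor y grows upward, so flip the axis.
    ny = min(max((box_h / 2 - hand_y) / box_h, 0.0), 1.0)
    return int(nx * (screen_w - 1)), int(ny * (screen_h - 1))
```

A hand at the sensor's centre lands at the middle of the screen, and positions beyond the box edges pin the cursor to the screen border rather than letting it run off.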
iController’s main triggering mechanism is landing a detected motion pointer (the cursor) on pre-defined “touch” sensors that appear on the screen. No gestures are required to activate functions.
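The landing-based trigger described above can be sketched as an on-screen rectangle that fires a callback the moment the pointer enters it, with no gesture recognition at all. The class name, re-arm behaviour, and callback shape below are assumptions for illustration, not the product's API.

```python
# Hypothetical sketch of a "touch sensor": a screen rectangle that
# triggers once when the motion pointer lands on it, then re-arms only
# after the pointer leaves, so hovering inside does not re-fire.

class TouchSensor:
    def __init__(self, x, y, w, h, on_trigger):
        self.rect = (x, y, w, h)        # screen-space rectangle, pixels
        self.on_trigger = on_trigger    # called once per landing
        self._inside = False            # tracks whether the pointer is inside

    def contains(self, cx, cy):
        x, y, w, h = self.rect
        return x <= cx < x + w and y <= cy < y + h

    def update(self, cx, cy):
        """Feed the current cursor position; fire once per landing."""
        hit = self.contains(cx, cy)
        if hit and not self._inside:
            self.on_trigger()
        self._inside = hit
```

An application would create one sensor per on-screen control and call `update` with the mapped cursor position on every frame; each landing then acts like a single mouse click.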
Project Information URL: http://www.xpegia.com/phoenix/index.php/download.html?fileid=2
Project Download URL: http://www.xpegia.com/phoenix/index.php/download.html?fileid=2 (Registration required)