Kinect for life (and a little Robosavvy)
Today's post is partly inspirational and partly something you can download and play with, in an area that you all know is near and dear to my heart: Kinect in the STEM world (that and robots ...)
Kinect for life
This week we had an event showcasing medical technology innovation in partnership with Kingston University, the University of Surrey, Brunel University and Microsoft.
Given the revolutionary advances made possible by Microsoft's Kinect for Windows, medical professionals and researchers are exploring how computer vision and natural user interfaces can enhance healthcare. The talks included:
- Fall detection system, Dr Dimitrios Makris, Kingston University
- Facial expression recognition from 3D data, Dr Hongying Meng, Brunel University
- Controlling a smart home, Dr Francisco Florez Revuelta, Kingston University
- Concept to Commercialisation – a strategy for business innovation 2011–2015, Graham Worsley, Lead Technologist in the Assisted Living Innovation Platform, Technology Strategy Board
- Kinect for Medical and Non-gaming applications: developments at the University of Surrey - Dr Kevin Wells, University of Surrey
The speakers and panellists were:
- Graham Worsley, Technology Strategy Board
- Prof Malcolm Sperrin, Royal Berkshire Hospital
- Dr Dimitrios Makris, Kingston University
- Dave Brown, Microsoft
- Prof Paolo Remagnino, Kingston University
- Dr Kevin Wells, University of Surrey
- Dr Hongying Meng, Brunel University
- Dr Francisco Florez Revuelta, Kingston University
- Tim Craig, Smart Care UK
Kinect use examples
This is a preview of technology that could be used in the future, for example, to perform remote surgery or to send robots to work in dangerous areas.
There is currently no standard for controlling robots.
Most small robots use a low-power microcontroller, similar to an Arduino. This is something like a computer, but far less powerful; it is, however, well suited to communicating with sensors, controlling motors, recharging batteries, and so on.
There is a huge variety of these microcontrollers, and even when the more common types are used, robot manufacturers usually create their own software to operate their robots.
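To make the microcontroller side concrete, here is a minimal sketch of how a motor command might be framed for such a board over a serial-style link. The byte layout (start byte, command id, payload, checksum) is entirely invented for illustration and does not correspond to any real board's protocol.

```python
# Hypothetical sketch: framing a "set motor speed" command for a
# small microcontroller. The protocol below is invented, not any
# real board's wire format.

START_BYTE = 0x7E
CMD_SET_MOTOR = 0x01

def checksum(data: bytes) -> int:
    """Simple additive checksum, modulo 256."""
    return sum(data) % 256

def encode_motor_command(motor_id: int, speed: int) -> bytes:
    """Pack a motor command into a framed packet.

    speed is -100..100 (percent); negative means reverse.
    """
    if not -100 <= speed <= 100:
        raise ValueError("speed out of range")
    payload = bytes([CMD_SET_MOTOR, motor_id, speed & 0xFF])
    return bytes([START_BYTE]) + payload + bytes([checksum(payload)])

packet = encode_motor_command(motor_id=2, speed=50)
print(packet.hex())  # 7e 01 02 32 35 -> "7e01023235"
```

The point of the sketch is that every manufacturer invents a layout like this one, which is precisely why code written for one robot rarely works on another.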
This leads to situations where, for example, a program written for one robot must be completely rewritten to work with another brand, even if the two are nearly identical at a hardware level.
This is where Microsoft Robotics Developer Studio (MRDS) steps in and closes that gap. Robot manufacturers, or users themselves, can write small software modules for each robot that act as translators between MRDS and the robot's own control system.
This means that in MRDS a command, for example to make a humanoid robot step forward, is identical across several brands of robots.
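The translator-module idea can be sketched as a classic adapter pattern: one generic command ("step forward") is mapped onto each brand's native protocol by a small per-robot module. The brand names and wire commands below are invented for illustration; MRDS itself is .NET-based, so this is only a language-neutral sketch of the concept.

```python
# Hypothetical sketch of the adapter idea behind MRDS: application
# code issues one generic command, and brand-specific modules
# translate it. All brand names and command strings are invented.

from abc import ABC, abstractmethod

class RobotAdapter(ABC):
    """Uniform interface that every brand-specific module implements."""

    @abstractmethod
    def step_forward(self) -> str:
        """Return the native command the robot actually understands."""

class BrandAHumanoid(RobotAdapter):
    def step_forward(self) -> str:
        return "WALK FWD 1"          # invented Brand A syntax

class BrandBHumanoid(RobotAdapter):
    def step_forward(self) -> str:
        return "#M:STEP;D:+1"        # invented Brand B syntax

def advance(robot: RobotAdapter) -> str:
    """Application code stays the same, whatever the brand."""
    return robot.step_forward()

for bot in (BrandAHumanoid(), BrandBHumanoid()):
    print(type(bot).__name__, "->", advance(bot))
```

Swapping one robot for another then means swapping the adapter, not rewriting the program.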
With MRDS, robots can now talk to each other, talk to sensors from different manufacturers, or be supervised by our own master process (a hypervisor) that makes sure everything is working as expected. MRDS also enables interoperability with complex functionality hosted on the PC, such as speech recognition.
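The supervising master process can be sketched as a simple loop over robot services, flagging anything that needs attention. The service fields and thresholds below are assumptions for illustration; a real MRDS deployment would poll distributed services over the network rather than inspect in-memory records.

```python
# Minimal sketch of a "master process" supervising several robot
# services. Fields and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class RobotService:
    name: str
    battery: float        # 0.0 .. 1.0
    responding: bool

def supervise(services):
    """Return a list of warnings for services that need attention."""
    warnings = []
    for s in services:
        if not s.responding:
            warnings.append(f"{s.name}: not responding")
        elif s.battery < 0.2:
            warnings.append(f"{s.name}: battery low ({s.battery:.0%})")
    return warnings

fleet = [
    RobotService("humanoid-1", battery=0.85, responding=True),
    RobotService("rover-2", battery=0.10, responding=True),
    RobotService("arm-3", battery=0.60, responding=False),
]
print(supervise(fleet))
```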
Another advantage of MRDS is that it is accessible to a wide range of users, regardless of their expertise. Beginners can build robot behaviours using the Visual Programming Language, while advanced users can work with textual programming (in any .NET language) to get the most out of MRDS.