Dynamic Grammar and the Kinect
As I've said before, I feel that speech is the killer Kinect feature. It makes some things just so much easier and less disruptive. I've quickly found the speech feature of my Xbox One to be one that's hard to live without. During the holidays I accidentally tried to control my daughter's TV via a voice command... Doh!
Today's project from Abhijit Jana talks about how you can build your Kinect app to handle different sets of speech commands dynamically.
The Microsoft Speech API provides the SpeechRecognitionEngine class, which works as the backbone of a speech-enabled application. Used with the Kinect SDK, the SpeechRecognitionEngine accepts an audio stream from the Kinect sensor and processes it. The recognition engine requires a set of grammars (sets of commands) that tell the recognizer what to match. You can build the grammars either using the Choices class or using an XML grammar document.
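As a rough sketch of the Choices approach, a grammar might be built like this (the specific command words here are illustrative, not from the article):

```csharp
using Microsoft.Speech.Recognition;

// Define the set of spoken commands the recognizer should match.
var commands = new Choices();
commands.Add("start");
commands.Add("stop");
commands.Add("exit");

// Wrap the choices in a GrammarBuilder and set the recognizer culture.
var grammarBuilder = new GrammarBuilder(commands)
{
    Culture = new System.Globalization.CultureInfo("en-US")
};

// Build the Grammar object that will be loaded into the engine.
var grammar = new Grammar(grammarBuilder);
```

The XML route expresses the same command set as an SRGS grammar document instead of code.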
Once you are done defining the grammar, you need to load it into the speech recognizer.
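A minimal sketch of that step, assuming a SpeechRecognitionEngine instance (speechEngine), a Kinect sensor (sensor), and a Grammar built as above:

```csharp
// Load the grammar so the recognizer knows what to listen for.
speechEngine.LoadGrammar(grammar);

// Handle recognized phrases.
speechEngine.SpeechRecognized += (s, e) =>
{
    Console.WriteLine("Recognized: " + e.Result.Text);
};

// Feed the Kinect audio stream to the engine and start recognizing.
speechEngine.SetInputToAudioStream(
    sensor.AudioSource.Start(),
    new SpeechAudioFormatInfo(EncodingFormat.Pcm, 16000, 16, 1, 32000, 2, null));
speechEngine.RecognizeAsync(RecognizeMode.Multiple);
```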
To build an extensive speech-enabled application you need multiple sets of commands, which means you need multiple grammar modules to load into the speech recognizer.
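One way to sketch this dynamic swapping (the helper method here is illustrative, not from the article) is to unload the current grammars and load the set that matches the application's state:

```csharp
using Microsoft.Speech.Recognition;

static void SwitchToGrammar(SpeechRecognitionEngine engine, Grammar next)
{
    // Drop the currently active command set...
    engine.UnloadAllGrammars();

    // ...and activate the one for the new application state.
    engine.LoadGrammar(next);
}
```

Alternatively, you can keep several grammars loaded and toggle each Grammar's Enabled property, so only the commands relevant to the current screen are matched.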