@lookemn: Thanks for the feedback, Matthew. We'll take on board that comment around the section which "didn't lead to anything" and have a bit of a think about what we might do there. There's a challenge for people like myself and Andrew in talking about "future plans" as we don't work on product teams and usually don't have much visibility of what might come in the future, but let us think on it and we'll see what we can come up with.
Mike Taulty works in the Developer and Platform Group at Microsoft in the UK.
@s3curityConsult: Thanks for the feedback. Our aim from the start was to show a lot of code and to share all of it, even though it's sample code rather than "production" code. We've stuck with that aim, so it's good to hear that it's what you want.
Keep watching and keep giving us feedback :)
Can you let us know which of the pieces that we talked about here you were interested in?
In the video, we showed some UWP APIs like SpeechRecognizer and SpeechSynthesizer and we also showed some RESTful APIs from Microsoft's Cognitive Services. We also showed some pieces on Android.
The RESTful services can be used from anywhere, including WPF and Xamarin code (e.g. I'd expect you to be able to call them from a portable class library, although recording audio might be a challenge that you have to overcome).
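As an aside, here's roughly the shape of a call to one of those RESTful endpoints from any client. This is a sketch in Python for brevity rather than C#; the endpoint URL and subscription key are placeholders, and the request is built but never actually sent:

```python
# Illustrative sketch only: building (not sending) a Cognitive Services
# Face "detect" request. Endpoint and key below are placeholder values.
import json
import urllib.request

SUBSCRIPTION_KEY = "YOUR-KEY-HERE"  # placeholder, not a real key
ENDPOINT = "https://westus.api.cognitive.microsoft.com/face/v1.0/detect"

def build_detect_request(image_url: str) -> urllib.request.Request:
    """Build a POST request asking the service to detect faces in an image URL."""
    body = json.dumps({"url": image_url}).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
        },
        method="POST",
    )

req = build_detect_request("https://example.com/photo.jpg")
print(req.get_method(), req.full_url)
```

The point being: it's just HTTP plus a subscription-key header, which is why any platform with an HTTP stack (WPF, Xamarin, Android, whatever) can call it.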
I'm not sure that UWP's SpeechRecognizer and SpeechSynthesizer are among the APIs you can reach from Xamarin, but the .NET Framework already has speech APIs in System.Speech.Synthesis and System.Speech.Recognition, so I'd expect that you can get equivalent functionality that way.
In terms of using UWP APIs 'from Xamarin', I'd expect that you would define an interface to abstract out the functionality that you want and then implement it once each for UWP, iOS and Android. As long as all three platforms have what you need in terms of speech, you'd be good to go. That work may well already exist out there somewhere.
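To sketch that abstraction pattern (in Python for brevity; in a real Xamarin solution this would be a C# interface with one implementation per platform, and every name here is made up for illustration):

```python
# Sketch of "shared code talks to an interface, each platform implements it".
from abc import ABC, abstractmethod

class ISpeech(ABC):
    """Platform-neutral contract the shared code programs against."""
    @abstractmethod
    def speak(self, text: str) -> None: ...
    @abstractmethod
    def listen(self) -> str: ...

class FakeUwpSpeech(ISpeech):
    """Stand-in for a UWP implementation, which would wrap
    SpeechSynthesizer/SpeechRecognizer; iOS and Android would
    get their own implementations of the same interface."""
    def __init__(self):
        self.spoken = []
    def speak(self, text: str) -> None:
        self.spoken.append(text)
    def listen(self) -> str:
        return "hello"

def greet(speech: ISpeech) -> str:
    # Shared, platform-agnostic code: it only knows about ISpeech.
    heard = speech.listen()
    speech.speak(f"You said: {heard}")
    return heard
```

The shared `greet` code never mentions a platform type, which is what lets the same logic run everywhere.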
Great question! I'm not 100% sure whether there's a better way of doing this, as ink shows up in the StrokeContainer as a collection of InkStroke objects which can then be manipulated (e.g. deleted).
However, it is possible to get hold of the points that make up an InkStroke (via the GetInkPoints() method), but that gives you a read-only view.
That said, it's then possible to build InkStroke objects manually using the InkStrokeBuilder class, so I'd say it's possible to:
- Select the stroke that contains the points you don't want.
- Delete it.
- Get the points from it.
- Build a new stroke containing only the points that you want.
- Add that new stroke back into the StrokeContainer.
That's perhaps one way of achieving what you want here; there may be a better way that I haven't yet come across.
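The steps above can be sketched with plain data structures standing in for InkStroke, InkStrokeBuilder and StrokeContainer (all the names and shapes here are illustrative, not the real UWP types):

```python
# Illustrative sketch of the delete-and-rebuild approach: a "container" is
# a list of strokes, a "stroke" is a list of (x, y) point tuples.

def rebuild_stroke(container: list, stroke: list, unwanted: set) -> None:
    """Remove 'stroke' from the container, then re-add it minus unwanted points."""
    container.remove(stroke)                          # steps 1-2: select and delete it
    points = list(stroke)                             # step 3: copy its (read-only) points
    kept = [p for p in points if p not in unwanted]   # step 4: build the new stroke
    if kept:
        container.append(kept)                        # step 5: add it back

strokes = [[(0, 0), (1, 1), (2, 2)]]
rebuild_stroke(strokes, strokes[0], {(1, 1)})
```

In the real UWP version, step 4 would go through InkStrokeBuilder.CreateStroke rather than a list comprehension, but the shape of the algorithm is the same.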
@hal9k2: Hi, what's being sent to Cognitive Services over the REST API is just a 'flat' photo, so the service can tell you who it thinks it sees in the photo but it can't know much about how that photo was taken. So I suspect that if you submitted a (quality) photo of a photo, the service would still perform recognition on it.
BTW - glad we made you laugh :)
Thanks, Ian. We'll come back and talk about Cortana in a follow-on show, but you can (to some extent) mix and match Cortana with speech recognition and synthesis. Generally, Cortana is about the Windows shell either launching your application with some parameters, or interacting with your application by calling your background tasks without involving your UI.
The speech bits that we look at here are about what happens INSIDE your app.
We'll look into the video problem ASAP. I've noticed that the HIGH and LOW quality versions are fine, but the MEDIUM quality version is missing the last 5 minutes or so of video.