@lookemn: Thanks for the feedback, Matthew - we'll take on board the comment about the section which "didn't lead to anything" and have a think about what we might do there. There's a challenge for people like myself and Andrew in talking about "future plans" because we don't work on product teams and usually don't have much visibility of what might come in the future, but let us think on it and we'll see what we can come up with.
@s3curityConsult: Thanks for the feedback. Our aim from the start was to show a lot of code and to share all of that code, even though it's sample rather than "production" code, and we've stuck with that aim - so it's good to hear that it's what you want.
Can you let us know which of the pieces that we talked about here you were interested in?
In the video, we showed some UWP APIs like SpeechRecognizer and SpeechSynthesizer and we also showed some RESTful APIs from Microsoft's Cognitive Services. We also showed some pieces on Android.
You can use the RESTful services from anywhere, including WPF and 'Xamarin' code (i.e. I'd expect you to be able to code this into a portable class library, although recording audio might be a challenge that you'd have to overcome).
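As a rough sketch of what calling one of those RESTful services looks like from any .NET code - the key is a placeholder and the region in the URL will depend on your subscription, so check the service documentation for your real values:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

static class FaceClient
{
    // Posts a JPEG to the Face API's detect endpoint and returns the raw
    // JSON response describing any detected faces.
    public static async Task<string> DetectFacesAsync(byte[] jpegBytes)
    {
        using (var client = new HttpClient())
        {
            // Cognitive Services authenticates via this subscription-key header.
            client.DefaultRequestHeaders.Add(
                "Ocp-Apim-Subscription-Key", "YOUR-KEY-HERE");

            using (var content = new ByteArrayContent(jpegBytes))
            {
                content.Headers.ContentType =
                    new MediaTypeHeaderValue("application/octet-stream");

                var response = await client.PostAsync(
                    "https://westus.api.cognitive.microsoft.com/face/v1.0/detect",
                    content);

                response.EnsureSuccessStatusCode();
                return await response.Content.ReadAsStringAsync();
            }
        }
    }
}
```

Because it's just HttpClient, this same code compiles for WPF, UWP or a Xamarin portable class library - the only platform-specific bit is how you capture the image or audio in the first place.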
For the UWP APIs...In general, there are a lot of UWP APIs that you can use from 'the desktop' and MSDN has a guide to that here and some work has been done here to make that more accessible.
I'm not sure that UWP's SpeechRecognizer and SpeechSynthesizer are part of that list of APIs but then the .NET Framework already has speech APIs within System.Speech.Synthesis and System.Speech.Recognition so I'd expect that you can get equivalent functionality that way.
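To sketch what those desktop speech APIs look like (this is a minimal, untested example - both namespaces live in the System.Speech assembly, which you'd need to reference in a .NET Framework project):

```csharp
using System;
using System.Speech.Recognition;
using System.Speech.Synthesis;

class Program
{
    static void Main()
    {
        // Synthesis: speak a phrase through the default audio device.
        using (var synthesizer = new SpeechSynthesizer())
        {
            synthesizer.Speak("Hello from the desktop");
        }

        // Recognition: listen for one of a small set of phrases.
        using (var recognizer = new SpeechRecognitionEngine())
        {
            var choices = new Choices("yes", "no");
            recognizer.LoadGrammar(new Grammar(new GrammarBuilder(choices)));
            recognizer.SetInputToDefaultAudioDevice();

            var result = recognizer.Recognize();
            Console.WriteLine(result?.Text);
        }
    }
}
```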
In terms of using UWP APIs 'from Xamarin' - I'd expect that you would define an interface to abstract out the functionality that you want and then implement it once for UWP, once for iOS and once for Android. As long as all 3 platforms have what you need in terms of speech, you'd be good to go. That work may well already exist out there somewhere.
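To sketch what that abstraction might look like - the interface name and members here are my own invention, not an existing library:

```csharp
using System.Threading.Tasks;

// Hypothetical cross-platform speech abstraction; each platform project
// (UWP, iOS, Android) supplies its own implementation.
public interface ISpeechService
{
    // Speak the given text aloud.
    Task SpeakAsync(string text);

    // Listen and return the recognized text (null if nothing recognized).
    Task<string> RecognizeAsync();
}

// e.g. the UWP implementation would wrap Windows.Media.SpeechSynthesis
// and Windows.Media.SpeechRecognition behind this interface, while the
// iOS and Android implementations would use their native speech APIs.
```

Your shared code then only ever talks to ISpeechService, and dependency injection (or a simple factory) picks the right implementation per platform.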
Great question! I'm not 100% sure on whether there's a better way of doing this as ink shows up in the StrokeContainer as a collection of InkStroke objects which can then be manipulated (e.g. deleted).
However, it is possible to get hold of the points that make up an InkStroke (via the GetInkPoints() method) but it gives you a read-only view.
That said, it's then possible to manually build InkStroke objects using the InkStrokeBuilder class so I'd say it's possible to:
1. Select the stroke that contains the points you don't want.
2. Get the points from it.
3. Build a new stroke containing only the points that you want.
4. Add that new stroke back into the StrokeContainer.
That's perhaps one way of achieving what you want here, there may be a better way that I haven't yet come across.
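Those steps might come together something like this - a sketch only, where unwantedPoint is a predicate of my own invention deciding which points to drop:

```csharp
using System;
using System.Linq;
using System.Numerics;
using Windows.UI.Input.Inking;

static class InkHelper
{
    public static void RemovePointsFromStroke(
        InkStroke stroke,
        InkStrokeContainer container,
        Func<InkPoint, bool> unwantedPoint)
    {
        // 1 & 2. Get the (read-only) points and filter out the unwanted ones.
        var keptPoints = stroke.GetInkPoints()
            .Where(p => !unwantedPoint(p))
            .ToList();

        // 3. Build a new stroke from only the points we want to keep,
        // preserving the original drawing attributes.
        var builder = new InkStrokeBuilder();
        builder.SetDefaultDrawingAttributes(stroke.DrawingAttributes);
        var newStroke = builder.CreateStrokeFromInkPoints(
            keptPoints, Matrix3x2.Identity);

        // 4. Delete the old stroke and add the new one back.
        stroke.Selected = true;
        container.DeleteSelected();
        container.AddStroke(newStroke);
    }
}
```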
@hal9k2:Hi, what's being sent to Cognitive Services over the REST API is just a 'flat' photo and so the service is able to tell you who it thinks it sees in the photo but it can't know much about how that photo was taken. So...I suspect if you submitted a (quality) photo of a photo then the service would still perform recognition on it.
Thanks Ian - we'll come back and talk about Cortana in a follow-on show, but you can (to some extent) mix and match Cortana with speech recognition and synthesis. Generally, Cortana is about the Windows shell either launching your application with some parameters OR interacting with your application by calling your background tasks without your UI.
The speech bits that we look at here are about what happens INSIDE your app.
We'll look into the video problem ASAP but I've noticed that the HIGH and LOW quality versions are fine while the MEDIUM quality version is missing the last 5 minutes or so of video.
I need to flag that there's a 'bug' in this video :) I goofed on the day and forgot to include a piece of code that's really quite critical.
At 12 minutes 43 seconds into this video I say "All you have to do is make a runtime check to make sure that it's ok to use the APIs that you want on a particular device family".
At around 28 minutes into the video, I forget to actually do that. When I'm writing a function called DoSomeGpioWork() I should actually be wrapping all of that code in an IF statement that checks that it's ok to use the GPIO APIs. I could do that by checking at the API level or the contract level and I'd usually use the contract level with some code like:

// ApiInformation lives in Windows.Foundation.Metadata; the low-level
// device contract is the one that covers the GPIO APIs.
if (ApiInformation.IsApiContractPresent("Windows.Devices.DevicesLowLevelContract", 1, 0))
{
    // Do GPIO Work
}
I got away with this in the talk but it's important to flag that it was my mistake on the day and so I thought that I'd add it here to make sure it was clear.