
Train with Azure ML and deploy everywhere with ONNX Runtime

with Olivier Bloch

You can now train machine learning models with Azure ML once and deploy them both in the cloud (AKS/ACI) and on the edge (Azure IoT Edge) seamlessly, thanks to the ONNX Runtime inference engine.

In this new episode of the IoT Show we introduce ONNX Runtime, the Microsoft-built inference engine for ONNX models: it's cross-platform, works with models from any training framework, and delivers on-par or better performance than existing inference engines.
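To give a feel for how scoring with ONNX Runtime looks in code, here is a minimal Python sketch; the model file name and the dummy input shape are illustrative assumptions, not details from the episode.

```python
# Minimal ONNX Runtime inference sketch (file name and input shape are assumptions)
import numpy as np
import onnxruntime as ort

# Load an exported ONNX model into an inference session
session = ort.InferenceSession("model.onnx")

# Inspect the model's expected input name and shape
input_meta = session.get_inputs()[0]
print(input_meta.name, input_meta.shape)

# Run inference on a dummy input tensor of the matching type
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_meta.name: dummy})
print(outputs[0].shape)
```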
We will show how to train and containerize a machine learning model using Azure Machine Learning, then deploy the trained model to a container service in the cloud and to an Azure IoT Edge device, across different hardware platforms: Intel, NVIDIA, and Qualcomm.
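As a rough sketch of the cloud deployment half of that workflow, the snippet below registers an ONNX model and deploys it as a container instance using the Azure ML Python SDK (v1). The workspace config, model path, service names, and the score.py entry script are assumptions for illustration; an AKS or IoT Edge deployment follows a similar register-and-deploy pattern.

```python
# Sketch: register an ONNX model and deploy it to ACI with the Azure ML SDK (v1).
# Names, paths, and score.py are placeholders, not from the episode.
from azureml.core import Workspace, Environment
from azureml.core.model import Model, InferenceConfig
from azureml.core.webservice import AciWebservice

ws = Workspace.from_config()  # reads the workspace config.json downloaded from the portal

# Register the trained ONNX model in the workspace
model = Model.register(workspace=ws,
                       model_path="outputs/model.onnx",
                       model_name="onnx-demo-model")

# Environment with the ONNX Runtime package available for scoring
env = Environment("onnx-inference-env")
env.python.conda_dependencies.add_pip_package("onnxruntime")
env.python.conda_dependencies.add_pip_package("numpy")

# score.py must define init() and run(), loading the model and calling ONNX Runtime
inference_config = InferenceConfig(entry_script="score.py", environment=env)

# Deploy as an Azure Container Instance in the cloud
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)
service = Model.deploy(ws, "onnx-demo-service", [model], inference_config, deployment_config)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)
```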

aka.ms/IoTShow/ONNXruntime