
Train with Azure ML and deploy everywhere with ONNX Runtime


Description

You can now train machine learning models with Azure ML once and deploy them seamlessly both in the cloud (AKS/ACI) and on the edge (Azure IoT Edge), thanks to the ONNX Runtime inference engine.

In this new episode of the IoT Show we introduce ONNX Runtime, the Microsoft-built inference engine for ONNX models. It is cross-platform, works across training frameworks, and offers on-par or better performance than existing inference engines.
We show how to train and containerize a machine learning model using Azure Machine Learning, then deploy the trained model both to a container service in the cloud and to an Azure IoT Edge device, across different hardware platforms: Intel, NVIDIA, and Qualcomm.

aka.ms/IoTShow/ONNXruntime


The Discussion

  • Juan Suero

    Looking for the VS Code project that was shown in the video; I cannot find it. Will this be available soon? Thanks, great work!

  • manashgoswami
    Repo for this reference implementation is here: https://github.com/Azure-Samples/onnxruntime-iot-edge
  • Alan

    I would like to know the differences between ONNX Runtime and ML.NET. In which use cases would you use one or the other? Thank you.

  • manashgoswami
    ONNX Runtime is supported in ML.NET through the ONNX Transformer. ONNX Runtime is strictly for inferencing, while ML.NET provides many other capabilities, including data preparation and training.
    ML.NET supports inferencing both TensorFlow and ONNX models, and we have added DNN training APIs as well.
  • MarkTab

    Quoting the reply above: "ONNX Runtime is supported in ML.NET through the ONNX Transformer. ONNX Runtime is strictly for inferencing, while ML.NET provides many other capabilities, including data preparation and training. ML.NET supports inferencing both TensorFlow and ONNX models, and we have added DNN training APIs as well."

    Agreed: and "inferencing" means functionally the same as "scoring" or "estimating". The application of light or heavy edge scoring with ONNX is a good architectural example.
