Running AI on IoT microcontroller devices with ELL

Description

How about designing and deploying intelligent machine-learned models onto resource-constrained platforms and small single-board computers, like Raspberry Pi, Arduino, and micro:bit? How interesting would that be?

This is exactly what the open source Embedded Learning Library (ELL) project is about. The deployed models run locally, without requiring a network connection and without relying on servers in the cloud. ELL is an early preview of the embedded AI and machine learning technologies developed at Microsoft Research.

Chris Lovett from Microsoft Research gives us a fantastic demo of the project in this episode of the IoT Show.

Get the ELL code on GitHub: https://github.com/microsoft/ell

The Discussion

  • Anatoly

    Embedded, tiny, on Python :-) - very funny, in MS style.

  • ChrisLovett

    @Anatoly: the Raspberry Pi demo uses Python just for convenience, but the audio keyword spotting demo is pure C++.
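
    For reference, invoking a compiled ELL model from Python takes only a few lines. This is a minimal sketch modeled on the ELL tutorials; the module name 'model' and the wrapper function names are assumptions that depend on how you compiled and named your model:

        import model  # hypothetical name of the Python wrapper ELL emits for a compiled .ell model

        # Ask the compiled model what shape of input it expects.
        input_shape = model.get_default_input_shape()
        size = input_shape.rows * input_shape.columns * input_shape.channels

        # Feed it a flat buffer of preprocessed samples (all zeros here, just to show the call).
        input_data = [0.0] * size
        predictions = model.predict(input_data)

        # 'predictions' holds one score per class; pick the best one.
        best = max(range(len(predictions)), key=lambda i: predictions[i])
        print("top class index:", best)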

  • Anatoly

    @ChrisLovett: thanks for the answer. I just meant that using, say, an Intel CPU with embedded neurons would not be a bad idea. Sequential programming in Python or even C++ is, I'm afraid, many times slower than an implementation on FPGAs or neurochips, where thousands of operations are performed in parallel.
    BTW, this Channel 9 site works in such a manner that I must post twice...

  • ChrisLovett

    @Anatoly: oh, sure, many companies are working on hardware optimization of neural networks, including Intel. For large, complicated vision models it makes sense to use special hardware, including GPUs, TPUs, NPUs, FPGAs, and even custom ASICs. ELL can target some of these hardware optimizations as well, when the support is provided by the LLVM back end (for example, LLVM can already target Qualcomm Hexagon DSP chips).
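
    To make the retargeting point concrete: with an LLVM-based toolchain, the same intermediate representation can be lowered for different instruction sets just by switching the target triple. This sketch uses the third-party llvmlite Python binding (not ELL itself), and assumes your llvmlite build ships with the ARM back end enabled:

        from llvmlite import binding as llvm

        llvm.initialize()
        llvm.initialize_all_targets()
        llvm.initialize_all_asmprinters()

        # One portable module of LLVM IR...
        mod = llvm.parse_assembly("""
        define float @scale(float %x) {
        entry:
          %y = fmul float %x, 2.0
          ret float %y
        }
        """)

        # ...lowered to native assembly for two different targets.
        for triple in ("x86_64-unknown-linux-gnu", "armv7-linux-gnueabihf"):
            target = llvm.Target.from_triple(triple)
            machine = target.create_target_machine()
            print("==>", triple)
            print(machine.emit_assembly(mod))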

  • Hrudaya K

    Can the Embedded Learning Library be used for deploying intelligent machine-learned models onto bare-metal targets like the Cortex-M or Cortex-A series?

  • ChrisLovett
    Yep, the answer is definitely yes. In fact, the Keyword Spotting on MXCHIP demo does exactly that, deploying to an ARM Cortex-M4. See https://github.com/IoTDevEnvExamples/DevKitKeywordSpotter/
  • Hrudaya K

    Is it possible to use the GCC compiler with ELL, rather than LLVM, for deploying machine-learned models onto the Cortex-M or Cortex-A series? If not, what is the reason behind using LLVM?

  • ChrisLovett
    ELL chose LLVM because it provides an open compiler framework that lets anyone do "compiler-like things" without having to write a compiler from scratch. Specifically, ELL uses the LLVM Intermediate Representation (IR) programming model to build programs and then emit them as code for a chosen target platform. See https://llvm.org/docs/ProgrammersManual.html for more information. I don't believe this kind of open compiler framework exists for GCC.
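
    To illustrate what that IR programming model looks like, here is a tiny sketch using the third-party llvmlite binding (ELL itself does this through its own C++ emitters, so treat this as an analogy rather than ELL code): you build a function node by node with an IR builder, and LLVM turns it into machine code for the chosen target:

        from llvmlite import ir

        # Build: float mul_add(float a, float b, float c) { return a * b + c; }
        module = ir.Module(name="demo")
        fnty = ir.FunctionType(ir.FloatType(), [ir.FloatType()] * 3)
        func = ir.Function(module, fnty, name="mul_add")

        block = func.append_basic_block(name="entry")
        builder = ir.IRBuilder(block)
        a, b, c = func.args
        builder.ret(builder.fadd(builder.fmul(a, b), c))

        # The module can now be printed as textual IR or handed to an LLVM back end.
        print(module)
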
  • David G

    I have loaded and tested the keyword spotter sample. Works great! Now I would like to add IoT Hub access and Wi-Fi. When I did that on the MXCHIP, I ran out of room. I really only need two words, so I am wondering whether I should build my own dictionary or go with a different approach. I am currently reading up on how to load ELL and use that, but wondering what the best option is. Input appreciated.

  • ChrisLovett
    Great question. I just posted some new keyword spotter models to our model gallery, so you could pick a smaller model, like this one: https://github.com/microsoft/ELL-models/tree/master/models/speech_commands_v0.01/Dulse. If that is still too big, you can go with the LSTM50 model named Cinnamon. If you want to train your own model with a smaller set of keywords, or different keywords, you can follow this tutorial: https://microsoft.github.io/ELL/tutorials/Training-audio-keyword-spotter-with-pytorch/. The output of that can then be compiled using 'compile.cmd' in the KeywordSpotter demo. Feel free to post any questions you run into along the way on the ELL GitHub issues list at https://github.com/microsoft/ELL.
  • David G

    Thanks Chris. I will take a look at those models.

    I really just need "on" and "off" for now, along with the interface to the cloud.

    Also, I don't want to hit a button to invoke the recorder, so I'm looking at various implementations for a POC.

    thanks again.
