Learn how to use audio from the Vision AI DevKit as input data for IoT solutions (aka.ms/iotshow/audiooniotedge). Using AI on the edge, you can monitor the sound of a bearing on a motor, the RPMs of a vehicle engine, or the suction of a vacuum pump, and detect failures when the sound changes. In this episode, Kevin Saye, Senior Technical Specialist, outlines five steps for processing audio with the Vision AI Developer Kit. First, you'll collect audio samples of a device (in this case, a water fountain) while it's working. Second, you'll label them. Third, you'll build a neural network for classifying the audio files. Fourth, you'll build the Azure IoT Edge modules that run inference on the audio. Finally, you'll deploy the modules to Azure IoT Edge.
Check out this sample solution for processing audio with a Vision AI DevKit: aka.ms/iotshow/audiooniotedge
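The collect-label-classify flow behind the first three steps can be sketched in miniature. The snippet below is only an illustration, not the episode's actual solution: it generates synthetic "audio" tones in place of recorded fountain samples, uses a naive DFT instead of a trained neural network, and the frequencies and threshold are hypothetical values chosen for the example.

```python
import math

def dominant_frequency(samples, rate):
    """Return the frequency (Hz) of the largest DFT magnitude bin."""
    n = len(samples)
    best_bin, best_mag = 0, 0.0
    for k in range(1, n // 2):  # skip the DC component
        re = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(-samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_bin, best_mag = k, mag
    return best_bin * rate / n

def make_tone(freq, rate=1000, n=200):
    """Stand-in for a collected audio clip: a pure sine wave."""
    return [math.sin(2 * math.pi * freq * t / rate) for t in range(n)]

def classify(samples, rate=1000, threshold=85.0):
    """Label a clip by its dominant frequency (hypothetical threshold)."""
    return "working" if dominant_frequency(samples, rate) < threshold else "failing"

# "Collect" and "label" samples: suppose a healthy fountain hums near
# 50 Hz and a failing pump whines near 120 Hz (illustrative values).
print(classify(make_tone(50)))    # healthy-sounding clip
print(classify(make_tone(120)))   # failing-sounding clip
```

In the real solution, the classifier stage would be the neural network built in step three, packaged as an IoT Edge module; the principle is the same: turn a labeled audio clip into frequency-domain features and map those features to a working/failing label.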