Azure IoT Workshop | Quality Assurance

End-to-end IoT workshop focusing on a quality assurance scenario powered by computer vision and AI.


Training and deploying a custom AI model for detecting visual anomalies

Fabrikam wants to improve the efficiency of their soda can manufacturing plant. They would like to identify and eliminate soda cans that have fallen over on their production lines in order to avoid slowdowns.

In the section below, we will re-configure the application already running on our Jetson Nano so that it runs a computer vision model that makes sense for Fabrikam's business. We will collect sample images from the production lines (some with cans in the proper upright position, some with misplaced cans) and build a custom AI model able to determine whether cans are upright or not. We will then deploy this custom AI model to DeepStream using IoT Central. There are many ways to train a computer vision model, but with the online Custom Vision service you can get an accurate model very quickly, without writing a single line of code. That's what you will be using in this section of the workshop.

Learning goals

Steps

Creating a new Custom Vision project

Let’s start by creating a new Custom Vision project in your Azure subscription. This will allow us to upload and tag the images we want to use for training our model, and to actually train the model using computing resources in the cloud.
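The workshop walks you through this step in the customvision.ai portal, but the same setup can also be scripted with the Custom Vision training SDK if you prefer. Below is a minimal Python sketch, assuming the `azure-cognitiveservices-vision-customvision` package is installed and that the endpoint, key, and project name placeholders are replaced with values from your own Custom Vision training resource:

```python
# pip install azure-cognitiveservices-vision-customvision
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from msrest.authentication import ApiKeyCredentials

# Placeholders: replace with the endpoint and key of your Custom Vision training resource.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com/"
TRAINING_KEY = "<your-training-key>"

credentials = ApiKeyCredentials(in_headers={"Training-key": TRAINING_KEY})
trainer = CustomVisionTrainingClient(ENDPOINT, credentials)

# Pick a *compact* object detection domain so the trained model can later be
# exported and run at the edge (e.g. on the Jetson Nano).
obj_detection_domain = next(
    d for d in trainer.get_domains()
    if d.type == "ObjectDetection" and "compact" in d.name.lower()
)

project = trainer.create_project("Soda Cans Down", domain_id=obj_detection_domain.id)
print("Created project:", project.id)
```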

Capturing a training dataset

We then need to collect images to build a custom AI model. In a real-life scenario, you would capture a (large) series of images from the cameras set up on the production line in order to establish your training dataset. However, in the context of this workshop and in the interest of time, a set of images has already been captured for you, which you can directly upload to Custom Vision.
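If you would rather script the upload than drag and drop the images in the portal, a sketch along these lines works with the same SDK; the local `dataset/` folder name is only an assumption for illustration:

```python
import os

# `trainer` and `project` come from the project-creation sketch above.
DATASET_DIR = "dataset"  # hypothetical folder holding the sample images

for filename in sorted(os.listdir(DATASET_DIR)):
    if not filename.lower().endswith((".jpg", ".jpeg", ".png")):
        continue
    with open(os.path.join(DATASET_DIR, filename), "rb") as image:
        # Uploads the raw image bytes; the images still need to be labelled
        # afterwards, either in the customvision.ai UI or via the SDK.
        trainer.create_images_from_data(project.id, image.read())
```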

Labelling a training dataset

Now that all the training images have been imported into the Custom Vision project, it is time to label them, i.e. to flag which area(s) of each image correspond to the visual feature we're interested in identifying.

Custom Vision labelling examples: a can to be labelled as “Up” and a can to be labelled as “Down”.

In the interest of time, you can later refer to this pre-built Custom Vision model.
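For reference, labelling and training can also be driven from the SDK instead of the portal. The sketch below creates the two tags used in this workshop, uploads one example image with a bounding box attached (the coordinates are normalized between 0 and 1, and both the values and the file name are placeholders), and starts a training run:

```python
import time

from azure.cognitiveservices.vision.customvision.training.models import (
    ImageFileCreateBatch, ImageFileCreateEntry, Region)

# `trainer` and `project` come from the project-creation sketch above.
up_tag = trainer.create_tag(project.id, "Up")
down_tag = trainer.create_tag(project.id, "Down")

# Upload a single image together with its bounding box (placeholder values).
with open("dataset/can_up_001.jpg", "rb") as image:  # hypothetical file name
    entry = ImageFileCreateEntry(
        name="can_up_001.jpg",
        contents=image.read(),
        regions=[Region(tag_id=up_tag.id, left=0.35, top=0.10, width=0.30, height=0.75)],
    )
trainer.create_images_from_files(project.id, ImageFileCreateBatch(images=[entry]))

# Start training and wait for the iteration to complete.
# Note: Custom Vision object detection needs at least 15 labelled images per tag.
iteration = trainer.train_project(project.id)
while iteration.status != "Completed":
    time.sleep(10)
    iteration = trainer.get_iteration(project.id, iteration.id)
print("Training finished with status:", iteration.status)
```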

Deploying a trained model

Finally, we’ll deploy this Custom Vision model to the Jetson Nano using IoT Central. In IoT Central:

After a few moments, the deepstream module should restart. Once it is in the Running state again, look at the output RTSP stream via VLC (Media > Open Network Stream > paste the RTSP Video URL that you got from the Device tab in IoT Central > Play).
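For context, what actually ends up on the Jetson Nano is an export of the trained Custom Vision iteration. If you want to retrieve that artifact programmatically rather than through the portal's Export button, a sketch like the following requests an ONNX export (which requires the compact domain chosen earlier) and prints its download URL; getting the exported model onto the device follows the IoT Central steps above:

```python
import time

# `trainer`, `project`, and `iteration` come from the sketches above.
trainer.export_iteration(project.id, iteration.id, platform="ONNX")

# Poll until the export is ready, then print its download URL.
export = None
while export is None or export.status == "Exporting":
    time.sleep(5)
    export = next(e for e in trainer.get_exports(project.id, iteration.id)
                  if e.platform.lower() == "onnx")

print("Export status:", export.status)
print("Download URI:", export.download_uri)
```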

We are now visualizing the processing of three real-time (e.g. 30 fps, 1080p) video feeds with a custom vision AI model that we built in minutes to detect visual anomalies!


Going further

You can take a few minutes to train an entirely different model using customvision.ai, and deploy it on the fly to your running Jetson Nano.

Wrap-up and Next steps

In the next section we will implement the final remaining step to provide Fabrikam with a minimum viable product: the ability to trigger custom rules and alerts when incidents on the production lines are detected.