The Micro AI Vision Sensor is compatible with the eIQ Toolkit from NXP and can be used to run computer vision machine learning models in real time. The eIQ Toolkit allows you to train models from existing datasets and easily adjust them to your needs before transferring them to the Micro Vision Sensor for deployment. The eIQ Toolkit can be downloaded from https://www.nxp.com/eiq; you will need it installed for the next steps. This guide covers the basics, but it is recommended to consult the eIQ Toolkit documentation for more in-depth usage.
- To train a model in eIQ Portal, select the CREATE PROJECT button and give the new project a name.
- Use the IMPORT button to add images of different objects you would like your model to classify. You can also use the CAPTURE button to use the local camera, or the REMOTE button to use the Micro AI Vision Sensor Starter Kit if it has been added to the dropdown; devices can be added from the REMOTE DEVICES button on the toolbar at the top of the window:
- When images are imported, you can click on them to add labels for objects in the image. To add a regional label, click and drag on the image to determine the bounding box of the label. To add a full image label to the image, click the + button on the right side.
- Once all data is labeled, you can click the SHUFFLE button to shuffle the data into testing and training sets. Click SELECT MODEL in the bottom left to move on to the next step.
- The model selection page allows you to design a custom model or augment an existing model. The BASE MODELS button contains several pre-existing models that can be trained on your data to fit your needs. We will be using the base model Mobilenet_v3. If you want to develop a model from scratch instead, select the options on the main screen. For more information, please see the eIQ documentation.
- The training screen is where the model is trained. You can increase the number of epochs with the slider on the left. Press the START TRAINING button and eIQ Portal will begin training the model, updating the accuracy graph as it goes. Train for as long as you like, and when done select the VALIDATE button.
- On the Validation page you can optionally test the accuracy of the model against the testing dataset by pressing the VALIDATE button. For more information on validation, see the eIQ documentation and the following section on using the IoT Vision Starter Kit – Micro to perform validation. To export and save the model, press the DEPLOY button, then save the model wherever you like using the EXPORT MODEL button:
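The SHUFFLE step above partitions your labeled images into training and testing sets. For readers unfamiliar with the idea, the following minimal Python sketch shows what such a split does conceptually. It is not eIQ Portal's actual implementation; the 80/20 ratio, the seed, and the file names are illustrative assumptions only:

```python
import random

def shuffle_split(items, test_fraction=0.2, seed=42):
    """Shuffle a dataset and split it into training and testing sets.

    Conceptual illustration only -- the ratio and seed are arbitrary
    and do not reflect eIQ Portal's internal behavior.
    """
    items = list(items)
    random.Random(seed).shuffle(items)  # deterministic shuffle for the example
    split = int(len(items) * (1 - test_fraction))
    return items[:split], items[split:]

# Example: 10 hypothetical labeled images, split 80/20.
images = [f"img_{i:02d}.png" for i in range(10)]
train, test = shuffle_split(images)
print(len(train), len(test))  # 8 2
```

The key property is that every image lands in exactly one of the two sets, so validation later measures performance on images the model never trained on.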
Remote Validation and Uploading Model through eIQ Portal
- The IoT Vision Starter Kit – Micro can be connected to the eIQ Portal software over the network to validate models. This can be used to evaluate the performance and accuracy of a model on the Vision Sensor before deployment. The eIQ Toolkit, along with documentation on core features such as training a model, can be obtained from https://www.nxp.com/eiq.
- To use the IoT Vision Starter Kit – Micro for remote validation with eIQ Portal, first disable the modelrunner by running the “modelrunner disable” command on the IoT Vision Starter Kit – Micro. This step is not mandatory, but it greatly speeds up model upload and processing. The modelrunner is used for real-time predictions with a model and is not required for validation (see “Uploading a model” below):
- Launch eIQ Portal, and train a model or load a previously trained model:
- Enter the Validation Workspace and select the “Validation Target” button. Enter the Micro AI Vision Sensor IP address and hit OK:
- The target will appear on the sidebar. Click the yellow Validate button and see the results:
Softmax Threshold is set to 0.5.
Softmax Threshold is set to 0.
Softmax Threshold is set to 1.
Softmax Threshold is set to 0.58.
The images above show that changing the softmax threshold affects the measured accuracy of the model. To get the best possible result in our cards demo, the Softmax Threshold has been set to 0.58.
This confusion matrix visually illustrates the accuracy of the model. The intensity of the colour illustrates the number of predictions in each cell. Cells on the diagonal represent correct predictions and are in green. All other values are in red as they are incorrect predictions. Cells can be clicked to see the images, their predictions, and the confidence of the model for that prediction. Increasing the “Softmax Threshold” slider on the right will set a minimum required confidence for a prediction to be counted. Predictions that do not meet this threshold will be moved to the “background” category. For more information on this screen, please see the eIQ documentation at https://www.nxp.com/eiq.
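The thresholding behavior described above can be illustrated in a few lines of Python. This is a conceptual sketch of how a softmax threshold routes low-confidence predictions to a “background” class, not eIQ's actual implementation; the card-class labels and the raw model outputs (logits) are made up for the example:

```python
import math

def softmax(logits):
    """Standard softmax: exponentiate each score and normalize to probabilities."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits, labels, threshold):
    """Return the best label, or 'background' when the top softmax
    score does not reach the threshold (conceptual sketch only)."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=lambda i: probs[i])
    return labels[best] if probs[best] >= threshold else "background"

labels = ["ace", "king", "queen"]   # hypothetical card classes
logits = [2.0, 1.0, 0.5]            # made-up model outputs; top score ~0.63

print(classify(logits, labels, threshold=0.58))  # ace
print(classify(logits, labels, threshold=1.0))   # background
```

This mirrors the screenshots above: at threshold 0 every prediction counts toward some class, while at threshold 1 essentially everything falls into “background”, and a tuned value in between (0.58 for the cards demo) gives the best reported accuracy.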
Uploading a model:
The Micro AI Vision Starter Kit can be used to run classification and detection computer vision models in real time, allowing it to label objects the Vision Sensor detects. To do this, first obtain a model, either by downloading an existing one or by training one using eIQ Portal. Make a note of where the model is stored, for example C:\NXP\mobilenet_v3_large.rtm
Upload using the Web UI:
- Upload the model using the web page.
- Select an .rtm file to upload and press enter.
- The model takes a few seconds to upload, then a few more seconds to apply to the micro.
- Once the upload window is gone, the model is uploaded.
Upload using eIQ Command Line Python Script
- From the eIQ Portal start screen, use the COMMAND LINE button to open a terminal in the install directory.
- From there, navigate to the python folder. On Windows, this command would be cd python:
- Run the modelclient tool by using the following command in this folder:

python -m deepview.rt.modelclient <Vision Sensor IP Address> -m <model location>

For example:

python -m deepview.rt.modelclient http://192.168.1.90 -m C:\NXP\mobilenet_v3_large.rtm
- The modelclient will begin uploading the model to the camera. Wait for the upload to reach 100%.
- After the model is uploaded, open a terminal to the IoT Vision Starter Kit – Micro and reboot the Vision Sensor using the reset command.
- The IoT Vision Starter Kit – Micro should start printing frame captures to the console. If not, run the modelrunner enable command:
- As before, enter the IP address of the IoT Vision Starter Kit – Micro into a web browser to see the output of the camera.
You should now have a real-time view of the Vision Sensor feed. If you place objects in front of the camera, it will try to detect and classify them based on the uploaded model.
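If you upload models often, the modelclient invocation shown above can be scripted. The sketch below only assembles the command line from the pieces documented above and prints it; the IP address and model path are placeholders for your own setup, and the actual upload line is left commented out because it requires the eIQ python folder and a reachable Vision Sensor:

```python
import subprocess
import sys

def build_upload_command(sensor_url, model_path):
    """Assemble the modelclient command documented above as an argv list."""
    return [sys.executable, "-m", "deepview.rt.modelclient",
            sensor_url, "-m", model_path]

# Placeholder address and path -- substitute your own values.
cmd = build_upload_command("http://192.168.1.90",
                           r"C:\NXP\mobilenet_v3_large.rtm")
print(" ".join(cmd))

# Uncomment to actually run the upload (requires the eIQ install's
# python folder on PYTHONPATH and the Vision Sensor on the network):
# subprocess.run(cmd, check=True)
```

Using `sys.executable` ensures the same Python interpreter that runs the script also runs the modelclient module.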