This QuickStart Guide walks a new user through converting an ONNX model to run on the Maivin 2 with the Hailo AI Accelerator.
This QuickStart is organized into three main stages: Model Training and Setup, Model Conversion, and Model Inference and Validation. Each is discussed briefly within this guide; if you are looking for more detailed information on any step, please see the linked articles for model training, or the Further Reading section for explanations of Conversion, Inference, and Validation.
All downloadable links can be found at the bottom of the page or within their respective topic page.
Requirements
- Local Linux PC running Ubuntu 22.04
- Maivin 2 with Hailo AI Accelerator installed
Base Models
The base model used for this example is a modified YoloV7-tiny model from this repo (ONNX model at bottom of page). It differs from the original YoloV7-tiny in the activation function used in each convolution block: instead of LeakyRelu, the model uses ReLU6, which has been found to behave more stably after quantization. The ONNX model trained on COCO is provided in the Downloads section of this page. For further information on model development and training YoloV7-tiny on your own dataset, please visit this page. Alternative models are supported but are not covered in this QuickStart.
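The activation swap is easy to see in isolation. The sketch below (plain Python for illustration, not the model code) compares the two functions: ReLU6 clamps to a fixed range, which maps cleanly onto a quantization grid, while LeakyReLU is unbounded above.

```python
def relu6(x):
    """ReLU6 clamps its output to [0, 6]; the bounded range maps
    cleanly onto a fixed-point quantization grid."""
    return min(max(0.0, x), 6.0)

def leaky_relu(x, slope=0.1):
    """LeakyReLU is unbounded above, so rare large activations stretch
    the quantization range and cost precision for typical values."""
    return x if x >= 0.0 else slope * x

for x in (-3.0, 2.5, 10.0):
    print(f"x={x:5.1f}  relu6={relu6(x):4.1f}  leaky={leaky_relu(x):5.1f}")
```

Note how relu6(10.0) saturates at 6.0 while leaky_relu(10.0) passes 10.0 through unchanged; it is this bounded output range that quantizes more predictably.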
COCO Dataset
The dataset used for training can be obtained by running the shell script get_coco.sh, provided in the Downloads. The script comes from the YoloV7 repo mentioned previously, but can be used standalone to download the COCO dataset. It works in a Linux environment or in WSL. Run it with either:
$ sh get_coco.sh
or:
$ chmod +x get_coco.sh
$ ./get_coco.sh
This should create a sub-directory called "coco" that contains annotations and images.
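As a quick sanity check before moving on, you can confirm that layout from Python. This is a hypothetical helper, not part of get_coco.sh; it only tests that the expected subdirectories exist.

```python
import os

def check_coco_layout(root="coco"):
    """Hypothetical sanity check: get_coco.sh should leave an
    'annotations' and an 'images' directory under the coco root."""
    expected = ("annotations", "images")
    return {name: os.path.isdir(os.path.join(root, name)) for name in expected}

print(check_coco_layout())
```

If either entry prints False, re-run get_coco.sh before attempting the conversion below.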
Model Conversion
Conversion is performed through a docker container that we provide at Docker Hub. The command that is used to generate the HEF model provided on this page is as follows on Linux:
$ sudo docker run -it --rm -v <directory>:/work <--gpus all> \
deepview/converter:hailo-3.26.0 --samples /work/<samples_folder> \
--num_samples 1024 --quant_normalization unsigned \
--output_names /model.77/m.0/Conv,/model.77/m.1/Conv,/model.77/m.2/Conv \
--default_shape 1,640,640,3 --quantize --include_nms \
/work/yolov7-edgefirst-coco.onnx /work/yolov7-edgefirst-coco.hef
The <directory> should contain both the tiny ONNX model and the coco subdirectory. If this is the current directory on Linux, it is available in the $PWD environment variable:
-v $PWD:/work
For the model to quantize properly, it is highly recommended that a GPU be available to the docker container. This is enabled by the following argument:
--gpus all
On Linux machines, nvidia-docker must be installed, and the base command becomes:
nvidia-docker
This conversion can take 35-40 minutes on a GeForce RTX 4060, and longer on lower-powered GPUs. Additionally, your work directory needs a samples folder containing at least 1024 images. If fewer images are provided or num_samples is reduced, the conversion may not produce a model matching the accuracy of the model provided in this QuickStart. You can satisfy this by pointing the samples folder at the COCO 2017 training images downloaded above:
--samples /work/coco/images/train2017
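Before starting a long conversion, it can be worth confirming that the samples folder actually holds enough images. A minimal sketch (a hypothetical helper; the path is assumed from the --samples argument above):

```python
import os

def count_samples(folder, exts=(".jpg", ".jpeg", ".png")):
    """Count image files directly inside folder; returns 0 if it is missing."""
    if not os.path.isdir(folder):
        return 0
    return sum(1 for name in os.listdir(folder) if name.lower().endswith(exts))

# Path assumed from the --samples argument above.
n = count_samples("coco/images/train2017")
print(f"{n} sample images found; the converter expects at least 1024")
```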
For a complete breakdown of the arguments provided and their purpose, please visit the Hailo Conversion page.
If you receive an error during conversion indicating that your GPU is out of memory, add the following argument to the command above:
--batch_size 1
This will increase the time for conversion, but should avoid the GPU being overloaded.
This conversion generates the HEF model in your current working directory, to be used in the following inference and validation steps. A successful conversion ends with output like:
[info] Building HEF...
[info] Successful Compilation (compilation time: 7s)
[info] Saved HAR to: /work/yolov7-edgefirst-coco.hef.har
Successfully exported model: /work/yolov7-edgefirst-coco.hef
Model Inference
Provided below are a hailo_detect.py script and a test image that can be run on the Maivin 2 board. Upload the Python file, test image, and labels text file below, along with the HEF file from the previous step, to the Maivin 2. You will need to install a few packages before running the script:
$ sudo pip3 install numpy zenlog pillow
Then, run the script using the following command.
$ sudo python3 hailo_detect.py -t 0.3 -u 0.25 --save_image --labels labels.txt \
yolov7-edgefirst-coco.hef test_image_coco.jpg
This prints the boxes resulting from model inference; with the --save_image flag, a new image is also saved with the boxes overlaid on the test image.
[box] label (scr%): xmin ymin xmax ymax [ load infer boxes]
test_image_coco.jpg [ 104.15 46.57 0.84]
[ 0] person ( 88%): 0.20 0.06 0.42 1.00
[ 1] person ( 84%): 0.57 0.04 0.88 1.01
[ 2] person ( 83%): -0.00 0.06 0.29 1.00
[ 3] person ( 47%): 0.17 0.18 0.23 0.30
[ 4] person ( 46%): 0.54 0.16 0.62 0.55
[ 5] umbrella ( 47%): 0.43 0.15 0.57 0.21
[ 6] wine glass ( 78%): 0.57 0.52 0.61 0.67
[ 7] wine glass ( 65%): 0.23 0.37 0.30 0.60
[ 8] wine glass ( 60%): 0.37 0.37 0.41 0.48
[ 9] dining table ( 37%): 0.33 0.55 0.74 0.90
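The box coordinates above are normalized to the 0..1 range, so they must be scaled by the image dimensions (and clamped, since values such as 1.01 or -0.00 can fall slightly outside the image) before drawing. A minimal sketch of that conversion, assuming a 640x640 test image:

```python
def to_pixels(box, width, height):
    """Scale a normalized (xmin, ymin, xmax, ymax) box to pixel coordinates,
    clamping to the image bounds since values can fall slightly outside 0..1."""
    xmin, ymin, xmax, ymax = box

    def clamp(v, hi):
        return max(0, min(int(round(v * hi)), hi))

    return (clamp(xmin, width), clamp(ymin, height),
            clamp(xmax, width), clamp(ymax, height))

# The second "person" box from the output above, on a 640x640 image.
print(to_pixels((0.57, 0.04, 0.88, 1.01), 640, 640))  # (365, 26, 563, 640)
```

The same pixel rectangles can then be drawn onto the image with Pillow's ImageDraw.rectangle, which is what the --save_image output corresponds to.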
The resulting image (test_image_coco_boxes.jpg) should look like the one below.
For an in-depth explanation of the Hailo inference framework within hailo_detect.py and additional information, please refer to the Hailo Inference page.
Model Validation
Our Deep View Validator supports running validation on a dataset using a Hailo model. The Hailo-specific validator Python package and the COCO 128-image dataset for testing validation are provided on the Hailo Validation page. Both need to be uploaded to the Maivin 2.
Validating the tiny ONNX model can be performed with the following setup and execution commands:
$ sudo pip3 install deepview-validator[onnx]
$ unzip coco128.zip
$ sudo python3 -m deepview.validator \
--validate detection --dataset coco128 --labels labels.txt \
--detection_score 0.25 --detection_iou 0.5 --validation_iou 0.5 \
--visualize coco128_hailo --norm unsigned yolov7-edgefirst-coco.onnx
Validating the Hailo model can be performed with the following setup and execution commands:
$ sudo pip3 install deepview-validator[hailo]
$ unzip coco128.zip
$ sudo python3 -m deepview.validator \
--validate detection --dataset coco128 --labels labels.txt \
--detection_score 0.25 --detection_iou 0.5 --validation_iou 0.5 \
--visualize coco128_hailo yolov7-edgefirst-coco.hef
This reports the accuracy achieved by the model along with other insights into its performance.
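The --validation_iou 0.5 threshold above controls when a predicted box counts as a match against a ground-truth box. Intersection-over-union is straightforward to compute; a minimal sketch (for illustration only, not the validator's internal code):

```python
def iou(a, b):
    """Intersection-over-union of two (xmin, ymin, xmax, ymax) boxes."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = iw * ih
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0.0 else 0.0

print(iou((0.0, 0.0, 1.0, 1.0), (0.5, 0.0, 1.0, 1.0)))  # 0.5
```

With --validation_iou 0.5, a prediction whose IoU with a same-class ground-truth box reaches 0.5 is counted as a true positive; lower overlaps count against the model's accuracy.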
For further information and the files used in this validation demonstration, please refer to the Hailo Validation page. To understand the dataset format required by the validator, please refer to the main Deep View Validator article.
Further Reading
Further insights can be gleaned from the model using the built-in Hailo tools, such as profiling and noise analysis, which provide a better understanding of the model's numerical performance and of which layers may not perform the same after quantization. These are explored further in the Hailo Tools article.
Downloads
To download the following ONNX and HEF models, please submit a ticket with the subject line "Access to the YOLOv7 Tiny Relu6 GitHub" and your GitHub username in the description.