This article provides a walkthrough on using Deep View Validator to evaluate the performance of Hailo models generated through the Hailo QuickStart Guide or Hailo Conversion. It is recommended to read the Deep View Validator article first, as it provides more detail on using the application. This article focuses only on the validation of Hailo models, with a brief overview of the arguments used and the results generated. The model used for demonstration purposes is the YOLOv7 COCO model generated through the QuickStart Guide.
Installation
Currently on Maivin it is necessary to run commands with sudo to ensure that the Hailo accelerator can be accessed. For this reason, the Python environment on the Maivin board must also be set up under the sudo environment. To install Deep View Validator with the Hailo dependencies, run the following command.
$ sudo pip3 install deepview-validator[hailo]
This will install all necessary packages for running validation.
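If you would like to confirm that the package was installed into the sudo environment, an optional quick check is to ask pip for the package details.

$ sudo pip3 show deepview-validator

This prints the installed version and location; if pip reports that the package is not found, re-run the install command above.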
Running Validation
At this point you will be able to run the validator using the following command.
$ sudo python3 -m deepview.validator \
--validate detection --dataset coco128 --labels labels.txt \
--detection_score 0.25 --detection_iou 0.5 --validation_iou 0.5 \
--visualize coco128_hailo yolov7-edgefirst-coco.hef
The command line parameters are described below.
--validate detection specifies that the model is an object detection model and runs detection validation.
--dataset refers to the folder that contains the validation set. The format of this set is described in Deep View Validator Supported Dataset Formats.
--labels is the path to the labels file for the particular dataset and model. Integer class indices from the dataset and the model are converted to their string representation using the contents of this file; an example labels file is shown after this list.
--detection_score and --detection_iou set the score and IoU thresholds for the NMS performed on the model outputs.
--validation_iou determines how closely a detected box must match a ground truth box from the validation set to count as a true positive; see the IoU sketch following this list.
--visualize is given a folder into which all of the results are stored, which is useful for visual inspection of the images and of the graphs of the computed metrics. The path is created if it does not exist. Examples are shown further below.
The last argument is a positional argument which specifies the path of the model file.
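As an illustration of the labels file referenced by --labels, a common layout (assumed here; check the Deep View Validator documentation for the exact format expected) is a plain-text file with one class name per line, where the line number corresponds to the class index. For the COCO model it would begin like this:

person
bicycle
car
motorcycle
airplane

(remaining COCO classes follow, one per line)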
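To make the IoU thresholds concrete, the following is a minimal sketch, not the validator's actual implementation, of how a detected box can be compared against a ground truth box. With --validation_iou 0.5, a detection of the correct class whose IoU with a ground truth box is at least 0.5 counts as a true positive. The boxes below are hypothetical.

def iou(box_a, box_b):
    # Boxes are (x1, y1, x2, y2) in pixels.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

ground_truth = (50, 50, 200, 200)   # hypothetical annotation from the dataset
detection = (55, 48, 205, 195)      # hypothetical model prediction
print(iou(detection, ground_truth) >= 0.5)  # True -> counted as a true positive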
Note that normalization is not provided; this is because normalization is baked into the model during conversion, see Hailo Conversion.
Reading the Output
Once validation of the model completes, a report will be generated that looks like the following.
Validation Output
This provides the total number of true positives (correct detections), false negatives (missed detections), and false positives (incorrect detections), as well as other useful metrics and timing information (found at the bottom of the report) for evaluating model performance. For a more detailed description of how model detections are classified, please refer to Deep View Validator Detection Classifications and Deep View Validator Matching and Classification Rules. For a deeper understanding of the accuracy, precision, and recall metrics, please refer to the Deep View Validator Object Detection Metrics page.
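In their simplest form (the metrics page referenced above gives the full definitions used by the report), precision and recall relate to these counts as sketched below; the numbers are hypothetical, not taken from the YOLOv7 run.

tp, fp, fn = 100, 20, 30    # hypothetical counts from a report
precision = tp / (tp + fp)  # fraction of detections that were correct
recall = tp / (tp + fn)     # fraction of ground truth objects that were found
print(f"precision={precision:.2f} recall={recall:.2f}")  # precision=0.83 recall=0.77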
This report can then be used in conjunction with the visualizations stored in the folder given to the --visualize argument, which give insight into which objects in the validation set are not being detected by the model (false negatives), which objects are being detected correctly (true positives), and which detections are incorrect (false positives).
coco128; 000000000532.png
As described in Deep View Validator Detection Classifications, this image shows true positives in green, ground truth objects in blue (which imply false negatives when not paired with a model detection), and false positives in red. The text annotations follow the format "object label score IoU".