Introduction
This package contains a stand-alone validator that uses VAAL, ModelClient, and the TensorFlow library to validate DeepViewRT and Keras (.h5) models. Validation datasets must be in Darknet or TFRecord format. Validation results can be posted to TensorBoard or saved on the local machine, with the validation metrics printed on the console.
For more information on the dataset and annotation formats, visit the following links.
- TFRecord format for ModelPack – Au-Zone Technologies (deepviewml.com)
- Darknet Ground Truth Annotations Schema – Au-Zone Technologies (deepviewml.com)
Requirements
- i.MX 8M Plus EVK running NXP Yocto BSP 5.10.72 or newer.
- VisionPack (Middleware)
- Edge Validation Tool Setup
Usage
The following command provides a template for running DeepViewRT and Keras models on either a Darknet or a TFRecord dataset:
python3 -m deepview.vaal.validator "path to the model" -d "path to the dataset"
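For example, assuming a DeepViewRT model named mymodel.rtm and a dataset directory named dataset (both names are placeholders):

python3 -m deepview.vaal.validator mymodel.rtm -d ./dataset/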
- For TFRecord datasets, "path to the dataset" can point to a directory containing *.tfrecord files or to a YAML file with the following structure:
classes:
  - class_1
  - class_2
  - class_3
validation:
  path: "path to the tfrecord files"
- For Darknet datasets, "path to the dataset" can point to a directory containing images (jpg, png, jpeg, JPG, PNG, JPEG) and annotation text files (a sample annotation line is shown after this list). It can also point to a directory with the following structure:
Directory Path
|-- images
|   |-- validate
|   |   |-- image.jpg ...
|-- labels
|   |-- validate
|   |   |-- image.txt ...
|-- labels.txt (optional)
Finally, it can also be a path to a YAML file with the following structure:
classes:
  - class_1
  - class_2
  - class_3
type: darknet
validation:
  annotations: "path to the annotation text files"
  images: "path to the image files"
To see the available command-line options for specifying model or dataset parameters, run:
python3 -m deepview.vaal.validator --help
Bounding Box Validations
As mentioned above, the images with visualizations and the metrics can either be posted to TensorBoard or saved on the local machine. Specifying the parameter "--tensorboard <path to save>" will save the TensorBoard logs under the specified path.
To view the results in TensorBoard, run the following command:
tensorboard --logdir="path to the tensorboard logs"
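For example, a complete TensorBoard workflow might look like the following, where mymodel.rtm, dataset, and logs are placeholder names:

python3 -m deepview.vaal.validator mymodel.rtm -d ./dataset/ --tensorboard ./logs/
tensorboard --logdir=./logs/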
Alternatively, specifying the parameter "--visualize <path to save>" will save the images with visualizations under the specified path and will print the metrics on the console.
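For example, with results as a placeholder output directory:

python3 -m deepview.vaal.validator mymodel.rtm -d ./dataset/ --visualize ./results/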
Plots
As part of detection validation, the following charts and plots are produced.
- Class Metric Histogram
The histogram on the left compares the precision, recall, and accuracy of each class, evaluated at a validation IoU threshold of 0.5. The histogram on the right shows the number of true positives, false positives, and false negatives of each class at the same threshold.
- Precision-Recall Curve
This plot shows the tradeoff between precision and recall. At higher confidence thresholds, precision is higher because only high-confidence predictions are kept, which limits the number of false positives; this also increases the number of false negatives and therefore lowers recall. As the confidence threshold decreases, the number of false positives increases and the number of false negatives decreases, so precision decreases while recall increases (a sketch after this list shows how these per-threshold metrics can be computed).
- F1 Curve
The F1 metric combines precision and recall into a single measure of the model's overall performance. The standard calculation is F1 = (2 x precision x recall) / (precision + recall).
- Precision-Confidence Curve
This plot shows how precision changes as the confidence threshold increases. Precision is expected to increase with the confidence threshold because keeping only high-confidence predictions limits the number of false positives.
- Recall-Confidence Curve
This plot shows how recall behaves as the confidence threshold increases. At lower confidence thresholds there are more predictions and therefore fewer false negatives, resulting in a high recall score. As the confidence threshold increases, the number of predictions decreases, the number of false negatives increases, and recall drops.
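All of these curves are derived from the true positive, false positive, and false negative counts taken across a sweep of confidence thresholds. The following Python sketch is for illustration only and is not the validator's internal implementation; the predictions list and num_gt count are hypothetical inputs:

# Illustrative sketch: precision, recall, and F1 at several confidence
# thresholds. Each prediction is a (confidence, matched_ground_truth)
# pair; num_gt is the total number of ground-truth boxes. These inputs
# are assumptions, not the validator's actual data structures.
def metrics_at_threshold(predictions, num_gt, threshold):
    kept = [matched for conf, matched in predictions if conf >= threshold]
    tp = sum(kept)                  # detections matched to a ground-truth box
    fp = len(kept) - tp             # detections with no matching ground truth
    fn = num_gt - tp                # ground-truth boxes that were missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical predictions: (confidence, matched a ground-truth box?)
preds = [(0.95, True), (0.90, True), (0.70, True), (0.45, True), (0.30, False), (0.15, False)]
for t in (0.2, 0.4, 0.6, 0.8):
    p, r, f1 = metrics_at_threshold(preds, num_gt=5, threshold=t)
    print(f"threshold={t:.1f}  precision={p:.2f}  recall={r:.2f}  f1={f1:.2f}")

With these sample values, precision rises from 0.80 to 1.00 as the threshold increases while recall falls from 0.80 to 0.40, mirroring the behavior the curves above describe.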
Semantic Segmentation Validations
Semantic segmentation models are currently DeepViewRT models, classified as either ModelPack or DeepLab, and can only be validated with Darknet-formatted datasets on the PC.
These models require ModelRunner to be running in the background on a target device such as an EVK.
Below is a template for starting ModelRunner on the EVK:
modelrunner -c 1 -H "Port Number"
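For example, using port 10818 (a value chosen here purely for illustration):

modelrunner -c 1 -H 10818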
Validation requires the path to the model, the path to the dataset, and the ModelRunner target (the EVK's IP address and port).
Below is a sample command for running ModelPack:
python3 -m deepview.vaal.validator "path to the model" -d "path to the dataset" --validate segmentation --target "EVK IP Address":"Port Number"
Below is a sample command for running DeepLab:
python3 -m deepview.vaal.validator "path to the model" -d "path to the dataset" --validate segmentation --model_segmentation_type deeplab --target "EVK IP Address":"Port Number"
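For example, assuming the EVK is reachable at 192.168.1.50 and ModelRunner is listening on port 10818 (both values, and the file names, are placeholders), a ModelPack run might look like:

python3 -m deepview.vaal.validator segmodel.rtm -d ./dataset/ --validate segmentation --target 192.168.1.50:10818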
The validation metrics for ModelPack are printed on the console. Note: as with detection, the metrics can also be posted to TensorBoard.
Semantic segmentation validation classifies each pixel in the image as a true positive, false positive, or false negative, which explains the very large counts in the validation report. There is no per-prediction IoU calculation because that applies only to bounding boxes.
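For illustration only (this is not the validator's internal code), the per-pixel counts for a single class can be derived from predicted and ground-truth masks as follows; the pred and truth arrays are hypothetical inputs:

import numpy as np

# Illustrative sketch: per-pixel true positive / false positive /
# false negative counts for one class. pred and truth are same-shape
# arrays of class indices; these are assumed inputs, not the
# validator's actual data structures.
def pixel_counts(pred, truth, cls):
    pred_c = pred == cls
    truth_c = truth == cls
    tp = int(np.sum(pred_c & truth_c))    # pixel correctly labeled as cls
    fp = int(np.sum(pred_c & ~truth_c))   # pixel labeled cls, but truth differs
    fn = int(np.sum(~pred_c & truth_c))   # cls pixel the model missed
    return tp, fp, fn

# Tiny hypothetical 2x3 masks with two classes (0 = background, 1 = object)
pred = np.array([[0, 1, 1], [0, 0, 1]])
truth = np.array([[0, 1, 0], [0, 1, 1]])
print(pixel_counts(pred, truth, cls=1))   # -> (2, 1, 1)

On a real 1920x1080 image these counts are accumulated over roughly two million pixels per image, which is why the totals in the report grow so large.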
Conclusion
This article has shown how to run the edge validation tool to validate DeepViewRT and Keras models.
To learn more about how model predictions are classified as true positives, false positives, and false negatives, visit this article.