Introduction
In this article we introduce our validation tool for evaluating model performance and accuracy. The tool is packaged as a Docker container, deepview/validator. This container helps you validate models and extract different metrics from the validation set. It can also overlay bounding boxes and visualize them using the ModelPack logging system. The validation SDK is capable of validating models in native Keras format as well as quantized models on remote targets over SSH. Remote-target validation requires VisionPack, which is already installed on the target system along with the detect app.
Evaluating a Keras model
Evaluating ModelPack in its native Keras format can be performed as follows:
docker run -it
(1) -v /data/outputs/checkpoints/<time-stamp>:/models
(2) -v /data/datasets/tfrecord:/data_out
(3) deepview/validator:latest
(4) /models/last.h5
(5) --dataset=/data_out/dataset_info.yaml
(6) --tensorboard=/models/logs
- Mounts the checkpoints folder at the /models location inside the container.
- Mounts the dataset location at /data_out. See the Dataset Export Guide for the contents of /data_out.
- The container name.
- Path to the model to evaluate, relative to the container.
- Path to the dataset, relative to the container.
- Path to the TensorBoard logs, relative to the container.
The command above shows the progress of the validation in the terminal. Once the program finishes, it also prints the location of the TensorBoard logs. The path has the form /models/logs/<time-stamp>.
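For reference, the annotated lines above assemble into a single command. The sketch below simply prints the assembled command so the mount paths can be reviewed before running it; the CKPT value is an assumption, and <time-stamp> stands for your own checkpoint folder name.

```shell
#!/bin/sh
# Sketch: assemble the Keras validation command from this guide.
# CKPT is an assumption -- point it at your own checkpoint folder.
CKPT="/data/outputs/checkpoints/<time-stamp>"
cmd="docker run -it \
  -v $CKPT:/models \
  -v /data/datasets/tfrecord:/data_out \
  deepview/validator:latest \
  /models/last.h5 \
  --dataset=/data_out/dataset_info.yaml \
  --tensorboard=/models/logs"
# Print the command for review; paste it into a terminal to run it.
echo "$cmd"
```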
Evaluating a Converted and Quantized Model
Before evaluating a converted and quantized model, it is recommended to read the Edge Validation Guidelines first. Using the same container, we can now run validation on a remote target.
docker run -it
(1) -v /data/outputs/checkpoints/<time-stamp>:/models
(2) -v /data/datasets/tfrecord:/data_out
(3) deepview/validator:latest
(4) /models/last_uint8_int8_tensor.rtm
(5) --dataset=/data_out/dataset_info.yaml
(6) --tensorboard=/models/logs
(7) --norm raw
(8) --target=10.10.40.182
(9) --upload_model
(10) --upload_dataset
(11) --dataset_image_extension=png
- Mounts the checkpoints folder at the /models location inside the container.
- Mounts the dataset location at /data_out. See the Dataset Export Guide for the contents of /data_out.
- The container name.
- Path to the model to evaluate, relative to the container.
- Path to the dataset, relative to the container.
- Path to the TensorBoard logs, relative to the container.
- The normalization function. Since the model expects uint8 input, raw is used and no normalization is applied.
- The remote target's IP address.
- Uploads the model to the target. The model does not need to be re-uploaded each time it is evaluated with different parameters, so this flag can be omitted on later runs.
- Uploads the dataset images to the /home/root/temp folder on the remote target. Once the files have been uploaded the first time, this option is no longer needed. NOTE: this directory must be managed manually; if you wish to validate against other datasets or images, SSH into the remote target and remove the files from this directory.
- Sets the dataset image extension to PNG rather than JPG.
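Because the /home/root/temp folder is managed manually, it must be cleared over SSH before switching to a different dataset. A minimal sketch, assuming the example target IP from this guide and a root login on the device:

```shell
#!/bin/sh
# Sketch: clear previously uploaded images from the remote target so a
# different dataset can be uploaded with --upload_dataset.
# TARGET uses the example IP from this guide; adjust user/host for your device.
TARGET="root@10.10.40.182"
cmd="ssh $TARGET 'rm -rf /home/root/temp/*'"
# Printed for review rather than executed directly.
echo "$cmd"
```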
Note: Additional command options can be found in Edge Validation Guidelines.
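As with the Keras case, the annotated lines assemble into one command. The sketch below prints the assembled command for review; the CKPT value is an assumption, <time-stamp> is your checkpoint folder, and the IP address is the example target used in this guide.

```shell
#!/bin/sh
# Sketch: assemble the remote-target validation command from this guide.
# CKPT is an assumption -- point it at your own checkpoint folder.
CKPT="/data/outputs/checkpoints/<time-stamp>"
cmd="docker run -it \
  -v $CKPT:/models \
  -v /data/datasets/tfrecord:/data_out \
  deepview/validator:latest \
  /models/last_uint8_int8_tensor.rtm \
  --dataset=/data_out/dataset_info.yaml \
  --tensorboard=/models/logs \
  --norm raw \
  --target=10.10.40.182 \
  --upload_model \
  --upload_dataset \
  --dataset_image_extension=png"
# Print for review; on later runs drop --upload_model and --upload_dataset,
# since the model and images remain on the target from the first run.
echo "$cmd"
```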
Loading results in TensorBoard
The Evaluation SDK also provides an interface for producing results in TensorBoard format. To enable it, set the --tensorboard option on the command line. To read the results, we use the ModelPack logging system as follows.
docker run -it -p 9999:6006
(1) -v /data/au-zone/checkpoints/<time-stamp>/logs/<time-stamp-2>:/logs
(2) deepview/tensorboard:latest
(3) --logdir=/logs
- Path to the log files generated by the Evaluation SDK.
- The container name.
- Points the --logdir option at the mounted log path.
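The -p 9999:6006 mapping publishes TensorBoard's default port 6006 on host port 9999, so once the container is running the results can be opened at http://localhost:9999. A sketch of the assembled command, with the <time-stamp> placeholders standing for your own run folders:

```shell
#!/bin/sh
# Sketch: assemble the TensorBoard viewer command from this guide.
# LOGS is an assumption -- point it at the log path printed by validation.
LOGS="/data/au-zone/checkpoints/<time-stamp>/logs/<time-stamp-2>"
cmd="docker run -it -p 9999:6006 \
  -v $LOGS:/logs \
  deepview/tensorboard:latest \
  --logdir=/logs"
# TensorBoard listens on 6006 inside the container, mapped to 9999 on the host.
echo "$cmd"
```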
Conclusions
In this article we explained how to validate models on a dataset using two different approaches: Keras model validation and validation of quantized models on a remote target. We also explained how to visualize and read the results.