DeepView ModelPack for Detection provides a state-of-the-art detection model with real-time performance on i.MX 8M platforms. It is especially well optimized for the NPU found on this platform and has been fine-tuned to support full per-tensor quantization while preserving accuracy close to that of the float model. ModelPack is fully compatible with eIQ Toolkit 1.4, which provides an easy-to-use graphical interface for creating datasets, training, validating models, and deploying optimized models to i.MX 8M platforms.
Note: It is recommended to read these guidelines in parallel with the ModelPack Quick Start Guide.
System Requirements
- eIQ Toolkit v1.4 or higher
- Internet connection required during the trial period (not required by the full version)
- A CUDA-capable GPU is recommended, but not required, to speed up training
How do I get set up?
- ModelPack was introduced for the first time in eIQ Portal 2.5 as a fully self-contained extension. It appears under the DEEPVIEW MODELPACK ADD ON section.
- This package does not require any additional installation; simply select Start Trial or import an existing license to get started immediately.
- ModelPack provides a rich set of parameters that allow easy tuning of the model's architecture. The following parameters are exposed for fine tuning:
- BatchNormalization: Like the default Keras BatchNormalization layer, this normalizes the activations of the previous layer at each batch, i.e. it applies a transformation that keeps the mean activation close to 0 and the activation standard deviation close to 1. Setting the parameter to `trainable` makes BatchNormalization layers adopt the default behaviour of the Keras library. Setting it to `frozen` makes the layer normalize the input using the moving average and standard deviation without updating them. In other words, normalization is always applied regardless of the parameter's value, but the moving average and standard deviation are only updated when the parameter is set to `trainable`.

  When training a model for a large number of iterations, it is desirable to introduce variation in batch statistics. Such variation acts as a regularization mechanism that can improve generalization and hence reduce overfitting. Indeed, using a very large batch size can harm generalization, as there is less variation in batch statistics, decreasing regularization [paper].

  When fine-tuning a model on a new dataset, batch statistics are likely to be very different if the fine-tuning examples have different characteristics from the examples in the original training dataset. If batch normalization is not frozen, the network will therefore learn new batch normalization parameters (gamma and beta) that differ from what the other network parameters were optimized for during the original training, effectively discarding the previously learned knowledge. Keeping this parameter frozen avoids this issue [paper].

  In ModelPack, this parameter is `trainable` by default.

  ![img.png](images/manual/001.png)
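The difference between the two modes can be sketched in plain NumPy (a simplified illustration, not ModelPack's implementation; gamma and beta are omitted and the epsilon and momentum values are assumptions):

```python
import numpy as np

def batch_norm(x, moving_mean, moving_var, momentum=0.99, eps=1e-3, frozen=False):
    """Simplified batch normalization (gamma=1, beta=0 omitted for clarity)."""
    if frozen:
        # Frozen: always normalize with the stored moving statistics,
        # and leave those statistics untouched.
        return (x - moving_mean) / np.sqrt(moving_var + eps), moving_mean, moving_var
    # Trainable: normalize with the current batch statistics and
    # update the moving statistics with an exponential moving average.
    mean, var = x.mean(axis=0), x.var(axis=0)
    moving_mean = momentum * moving_mean + (1 - momentum) * mean
    moving_var = momentum * moving_var + (1 - momentum) * var
    return (x - mean) / np.sqrt(var + eps), moving_mean, moving_var

x = np.array([[1.0], [3.0]])  # one feature, batch of two
y, mm, mv = batch_norm(x, moving_mean=np.zeros(1), moving_var=np.ones(1))
print(mm, mv)                 # moving statistics drift toward the batch statistics

y_frozen, mm_f, mv_f = batch_norm(x, np.zeros(1), np.ones(1), frozen=True)
print(mm_f, mv_f)             # statistics unchanged when frozen
```

In both modes the input is normalized; only the bookkeeping of the moving statistics differs, which is exactly the `trainable` vs `frozen` distinction described above.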
- Freeze Backbone: This parameter indicates that ModelPack will only include the regression headers in the training process, keeping the trunk of the model out of the learning graph. It accepts one of two values: `True` or `False`. Setting it to `True` excludes the variables from the trunk of the model from the learning process (these variables receive no updates), so the model trains faster since fewer variables are updated through the gradients. Setting it to `False` makes the training process update all the variables in the model.
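  A toy gradient-descent loop illustrates the effect of freezing (a sketch only; the parameter names and the quadratic loss are assumptions, not ModelPack's implementation):

```python
def sgd_step(params, grads, trainable, lr=0.1):
    """Update only the parameters listed in `trainable`; frozen ones keep their values."""
    return {k: (v - lr * grads[k]) if k in trainable else v
            for k, v in params.items()}

# Toy "model": one trunk (backbone) weight and one detection-head weight,
# with loss L = backbone**2 + head**2, so grad = 2 * value for each.
params = {"backbone": 1.0, "head": 5.0}
grads = {k: 2.0 * v for k, v in params.items()}

frozen = sgd_step(params, grads, trainable={"head"})
print(frozen)   # backbone unchanged at 1.0, head updated

full = sgd_step(params, grads, trainable={"backbone", "head"})
print(full)     # every parameter updated
```

  With the backbone frozen, each step touches fewer variables, which is why training is faster in that mode.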
  If the user decides to freeze the backbone, the model must be trained in two stages. The first stage trains the last part of the model (the detection headers, or regressors). At the end of stage one, the resulting model is used to initialize the weights of a new training session (the second stage), in which the model is fully trained, this time with a very small learning rate (e.g. `lr=1e-6`). For more information about two-stage training, see the ModelPack Two Stage Training Guidelines.
- Allowed Boxes: When training a model, building the ground truth properly can be a hard task, since predictions need to be compared against valid values, and creating object detection ground truth can be overwhelming and tedious. To keep this important step as simple as possible, ModelPack exposes an additional parameter that limits the maximum number of allowed boxes per image. The higher this number, the more memory training will request, so it is recommended to keep it as close as possible to the real value (the maximum number of annotations per image in your dataset).

  This parameter accepts one of the following values: [80, 100, 120, 150, 200]; 150 is the default.
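  The memory cost of this limit can be sketched as padding each image's annotations to a fixed-size tensor (a simplified illustration; the `[x1, y1, x2, y2, class]` row layout and zero padding are assumptions, not ModelPack's exact ground-truth format):

```python
import numpy as np

def pad_boxes(boxes, max_boxes=150):
    """Pad a per-image list of [x1, y1, x2, y2, class] rows to a fixed shape.

    A fixed shape lets every image in a batch share one ground-truth tensor;
    a larger max_boxes means more memory spent on mostly-empty padding rows.
    """
    boxes = np.asarray(boxes, dtype=np.float32).reshape(-1, 5)
    if boxes.shape[0] > max_boxes:
        raise ValueError("image has more annotations than the allowed boxes limit")
    padded = np.zeros((max_boxes, 5), dtype=np.float32)
    padded[: boxes.shape[0]] = boxes
    return padded

gt = pad_boxes([[0.1, 0.2, 0.4, 0.5, 3]], max_boxes=150)
print(gt.shape)  # (150, 5): one real box followed by 149 padding rows
```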
- Optimizer: To choose the best optimizer we experimented with a long list of options (SGD, Nadam, AdaDelta, etc.). The best results and the fastest convergence were consistently achieved by Adam, which is therefore the only available option.
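For reference, a single Adam update follows the standard published formulation (a plain-NumPy sketch, not ModelPack code; the hyperparameter values shown are Adam's usual defaults):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: moment estimates, bias correction, parameter step."""
    m = beta1 * m + (1 - beta1) * grad        # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment (uncentered variance) estimate
    m_hat = m / (1 - beta1 ** t)              # bias correction for step t
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

theta = np.array([1.0])
m = v = np.zeros_like(theta)
theta, m, v = adam_step(theta, grad=np.array([0.5]), m=m, v=v, t=1)
print(theta)  # the first step moves the parameter by roughly the learning rate
```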
To start evaluating ModelPack, we make available a diverse set of eIQ Portal projects which can be downloaded from the Au-Zone Datasets Zoo. These datasets have been prepared for use in eIQ Portal and include validated ModelPack checkpoints that are ready to use on i.MX 8M platforms with VisionPack.
Key Benefits
- Fully supported package, 100% compatible with eIQ Portal technologies
- Full integer quantization (per-tensor and per-channel)
- Highly specialized, automatic anchor generation
- Minimal drop in accuracy after quantization, including per-tensor quantization
- Fast convergence (reduced training times)
- Full support in VisionPack
- Extended maintenance for licensed users
Commercial Support
- Contact Au-Zone Technologies:
- Information: info@au-zone.com
- Support: support@au-zone.com
- Address: 302 - 1240 20 Ave SE, Calgary AB T2G 1M8
- Telephone: (+1) 403-261-9985
- Web Site: Au-Zone
- Contact Page: Contact Us