Introduction
This example demonstrates how to use the VAAL plugins for GStreamer to run a detection model with on-screen overlays of the detected objects and their labels. This article covers the pure command-line version; C and Python examples are provided in the following articles:
- Detection from C
- Detection from Python
Example
The command-line version uses the default i.MX8M Plus EVK camera (/dev/video3) and renders the video, along with the bounding box overlays, to the HDMI display. Before attempting to run the application, ensure the camera and display are correctly configured.
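As a quick sanity check, assuming v4l-utils is installed and the camera enumerates as /dev/video3 on your board, you can list the capture devices and preview the raw camera feed without any inference:

v4l2-ctl --list-devices

gst-launch-1.0 v4l2src device=/dev/video3 ! video/x-raw,width=640,height=480 ! autovideosink

If the preview renders to the display, the camera and display are ready for the detection pipeline below.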
gst-launch-1.0 v4l2src device=/dev/video3 ! video/x-raw,width=640,height=480 ! tee name=camera ! queue leaky=2 ! deepviewrt model=centernet_256x512_cards.rtm ! boxdecode thr=0.02 ! overlay. camera. ! queue ! boxoverlay name=overlay ! autovideosink
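In this pipeline the tee element splits the camera stream into two branches: one feeds deepviewrt for inference, with a leaky queue that drops frames whenever the model falls behind the camera, while the other carries the live video to boxoverlay, which draws the boxes decoded by boxdecode (the two branches are joined through the overlay. and camera. element references) before the result is rendered by autovideosink.

To get a rough end-to-end frame rate reading, the sink can be swapped for fpsdisplaysink, assuming it is included in your GStreamer build; the rest of the pipeline is unchanged:

gst-launch-1.0 v4l2src device=/dev/video3 ! video/x-raw,width=640,height=480 ! tee name=camera ! queue leaky=2 ! deepviewrt model=centernet_256x512_cards.rtm ! boxdecode thr=0.02 ! overlay. camera. ! queue ! boxoverlay name=overlay ! fpsdisplaysink text-overlay=false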
Models
This example includes a few models trained on the playing cards dataset. For additional models, including those trained on alternate datasets, please refer to our Model Zoo.
The following table shows the metrics for the provided models. All models were trained at 320x320 input resolution. To re-train these models using eIQ Portal, please refer to Model Pack.
| Model                  | mAP@0.50 | Input Time | Inference Time | Decode Time |
|------------------------|----------|------------|----------------|-------------|
| Mobilenet V3 Small SSD | 42%      | 0.86 ms    | 4.46 ms        | 2.48 ms     |
| YOLOv4 Tiny*           | 56%      | 0.82 ms    | 8.66 ms        | 3.14 ms     |
| CenterNet R18*         | 76%      | 0.88 ms    | 11.27 ms       | 7.74 ms     |
* YOLO and CenterNet can be re-trained on custom datasets using Model Pack.