VisionPack includes two applications for testing the model and inspecting the results. The detect application is the compiled version of detect.c (visionpack-latest/samples/detect.c); it focuses on running inference and printing timings and detected boxes to the command line. detectgl is a more sophisticated application that renders the results to a display using OpenGL acceleration (Vehicles and Pedestrians Detection).
- i.MX 8M Plus EVK
- VisionPack properly installed in the system (Quick Start Guide)
Installation and Configuration
1. Connect to your EVK using any SSH client.
2. Go to your home directory and install VisionPack (Quick Start Guide).
3. Activate the visionpack environment.
4. Run a sample command.
Step 4 should return the following output:
over the following input image (000000000641.jpg):
To run the detect application, use the following command on the attached image (the coco128 dataset is also included in visionpack-latest/datasets):
The results will look like this:
[box] label (scr%): xmin ymin xmax ymax [ load infer boxes]
visionpack-latest/datasets/coco128/000000000641.jpg [ 138.75 16.15 4.44]
[ 0] person ( 73%): 0.88 0.55 0.91 0.67
[ 1] bus ( 95%): 0.15 0.23 0.87 0.82
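Note that the box coordinates are normalized to the [0, 1] range, so they must be scaled by the image width and height to obtain pixel positions. A minimal sketch of the conversion (the 640x480 resolution here is only an assumed example, not the size of the sample image):

```python
def to_pixels(box, width, height):
    """Convert a normalized [xmin, ymin, xmax, ymax] box to integer pixel corners."""
    xmin, ymin, xmax, ymax = box
    return (int(xmin * width), int(ymin * height),
            int(xmax * width), int(ymax * height))

# The bus box from the sample output, on an assumed 640x480 frame.
print(to_pixels([0.15, 0.23, 0.87, 0.82], 640, 480))  # (96, 110, 556, 393)
```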
Results in the command line are printed in the following order:
- box id
- label
- score (%)
- normalized box coordinates (xmin ymin xmax ymax)
- timing summary [ load infer boxes ]:
  - load time (load)
  - inference time (infer)
  - decoding time + NMS (boxes)
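If you want to post-process these results programmatically, a detection line can be split into its components. The parsing rules below are an assumption inferred from the sample output above, not part of VisionPack itself:

```python
import re

def parse_detection(line):
    """Parse one detect output line of the assumed form:
    '[  0] person ( 73%): 0.88 0.55 0.91 0.67'
    Returns a dict with box id, label, score, and normalized corners,
    or None if the line does not match.
    """
    m = re.match(
        r"\[\s*(\d+)\]\s+(\S+)\s+\(\s*(\d+)%\):\s+"
        r"([\d.]+)\s+([\d.]+)\s+([\d.]+)\s+([\d.]+)",
        line,
    )
    if m is None:
        return None
    box_id, label, score, xmin, ymin, xmax, ymax = m.groups()
    return {
        "id": int(box_id),
        "label": label,
        "score": int(score),
        "box": [float(xmin), float(ymin), float(xmax), float(ymax)],
    }

print(parse_detection("[  1] bus ( 95%): 0.15 0.23 0.87 0.82"))
```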
To visualize the results, use the following script on the input image:
import cv2  # OpenCV

img = cv2.imread("000000000641.jpg")
H, W, _ = img.shape

# Normalized coordinates copied from the detect output above
bboxes = [
    [0.88, 0.55, 0.91, 0.67],
    [0.15, 0.23, 0.87, 0.82],
]
labels = ["person", "bus"]

for label, box in zip(labels, bboxes):
    if label == "person":
        color = (255, 0, 0)  # BGR format
    else:
        color = (0, 255, 0)  # BGR format
    x1 = int(box[0] * W)
    y1 = int(box[1] * H)
    x2 = int(box[2] * W)
    y2 = int(box[3] * H)
    cv2.rectangle(img, (x1, y1), (x2, y2), color, 2)

cv2.imwrite("000000000641_boxes.jpg", img)
DetectGL is a graphical, OpenGL-based version of detect, which you can download from the DeepView AI Application Zoo. The application shares the same options as the detect application, plus additional parameters that can be listed by running detectgl --help.
Note: the application currently requires a display to operate; please inquire about the upcoming off-screen version with remote streaming.
The command above launches a full-screen application that draws the overlay boxes on the display. The model used was trained on the COCO dataset and has 80 classes. The user can stand in front of the camera and see the results drawn in the application; a summary of timing information is also provided in a side panel.
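The timing values can also be used to estimate the sustainable frame rate. A rough sketch, assuming the detect timings are in milliseconds (as the sample values suggest) and excluding model load time, which is paid only once:

```python
def throughput_fps(infer_ms, boxes_ms):
    """Frames per second sustainable by inference plus decoding/NMS.
    Millisecond units are assumed from the detect sample output;
    one-time model load time is deliberately excluded.
    """
    return 1000.0 / (infer_ms + boxes_ms)

# Using the sample timings: 16.15 ms inference + 4.44 ms decoding/NMS.
print(round(throughput_fps(16.15, 4.44), 1))  # 48.6
```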
In this article we explained how to interpret the results returned by the detect application. We also provided a sample script that plots the results on the input image. Note that the values printed in the terminal must be copied into the Python script.