Introduction
This article describes some methods to verify the latency of a VisionPack pipeline. These methods do not require special equipment but do have precision limitations, which we outline in the Limitations section. The goal of these methods is to provide a best-effort measurement of the general latencies and to identify potential blind spots. A method to verify latency to a higher degree of precision is outside the scope of this article.
Setup
The setup requires two smartphones and an i.MX 8M Plus with a camera and display. Better-quality displays and phones will help, though they will still have limitations.
The setup involves pointing the 8M Plus camera at a smartphone running a stopwatch with milliseconds shown and streaming the feed to the display. The second phone is used to capture the scene, including the first phone and the monitor.
With this setup we can measure the end-to-end latency by comparing the stopwatch on the first phone with the stopwatch visible in the feed on the display at the moment the second phone captures the photo. You can generally expect a good read on the tenths of seconds (hundreds of milliseconds), but below this things get tricky.
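For example, if a photo were to show the phone's stopwatch at 18.400 seconds and the display's copy of it at 18.200 seconds (illustrative values, matching the GStreamer measurement later in this article), the end-to-end latency is simply the difference between the two readings:

python3 -c "print('%.0f ms' % ((18.400 - 18.200) * 1000))"

This prints 200 ms; in practice the trailing digits carry the uncertainty discussed in the Limitations section below.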
Limitations
There are a number of limitations with this setup. It nevertheless remains a good way to verify general latency and to compare against internal instrumentation of the pipeline, for example using the VisionPack GStreamer Instrumentation Example.
- A high-end phone can have a refresh rate of 120 Hz or, more recently, 240 Hz, but even this high-end refresh rate limits accuracy to 4.16ms, assuming the stopwatch application can render at that framerate (which is unlikely).
- On the 8M Plus EVK the OV5640 camera driver is limited to 30 Hz. This limits accuracy to 33.33ms.
- The 8M Plus EVK HDMI driver is limited to 60 Hz which limits accuracy to 16.66ms.
- Camera exposure can cause the digits to blend, making it hard to distinguish exact values when they change too quickly for the exposure time.
- A high-end phone might be able to semi-reliably capture hundredths of a second.
- The 8M Plus display is only reliable to tenths of a second.
- All of these rate limits can add up in the worst case (see the sketch after this list).
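If you want to confirm the actual rates on your own setup, the camera and display limits can be checked directly on the EVK. The following is a sketch assuming a Yocto image with the v4l-utils and libdrm test tools installed; the video device node and DRM connector will likely differ on your board.

# Frame rates advertised by the camera driver (the OV5640 tops out at 30 fps)
v4l2-ctl -d /dev/video0 --list-formats-ext

# Display modes and refresh rates exposed by the DRM driver
modetest -c

# Worst-case quantization if the stopwatch (240 Hz), camera (30 Hz) and
# display (60 Hz) limits all line up
python3 -c "print(1000/240 + 1000/30 + 1000/60)"   # roughly 54 ms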
Results
With the above limitations noted, we now present some measurements of this setup with a generic GStreamer pipeline and with the VPKUI OpenGL demo application, with and without a model running, along with the internal instrumentation.
GStreamer
This first image shows the latency when running the generic GStreamer pipeline below.
gst-launch-1.0 v4l2src ! video/x-raw,width=1920,height=1080 ! waylandsink
We can clearly see the phone reporting 18.400 seconds, while the EVK appears to read roughly 18.200 (possibly .280, but hard to say for certain). The latency is likely in the 120-200ms range.
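As a sanity check against the photo-based reading, GStreamer's built-in latency tracer can be enabled on the same pipeline. Note that this is separate from the VisionPack GStreamer Instrumentation Example mentioned above, and it only measures the GStreamer portion of the path (it does not include camera exposure or display scan-out), so it should report lower numbers than the photo method:

GST_TRACERS="latency" GST_DEBUG="GST_TRACER:7" gst-launch-1.0 v4l2src ! video/x-raw,width=1920,height=1080 ! waylandsink

The per-element and pipeline latency entries appear in the debug output on stderr.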
VPKUI (OpenGL)
Next is the OpenGL-based VPKUI demo application running without a model loaded, showing similar latency. In this case the number on the monitor is slightly more visible, and the latency appears to be 134ms.
VPKUI with Model
Finally, we run the same VPKUI demo application, but this time with a model loaded. The on-screen statistics show the model taking about 17ms for frame load and inference. From the photo the latency appears to be around 143ms if we read the EVK output as 1.337.
Conclusion
While this method cannot provide high accuracy, it does confirm to a reasonable degree that the VisionPack pipeline is not adding significant latency, beyond model processing time, to the generic camera pipeline on the i.MX 8M Plus EVK.