Introduction
Berkeley Diverse Driving, also known as BDD100K, was first presented in 2018 in the paper "BDD100K: A Diverse Driving Dataset for Heterogeneous Multitask Learning". The dataset was built to tackle two different problems: mask (instance segmentation) detection and bounding box detection. BDD100K was imported into DeepView format by the Au-Zone team; during that import we also removed extremely small objects as well as 0-width and 0-height objects. The dataset is now shared in the Au-Zone Datasets Zoo and has been widely tested with all detection models shipped in eIQ Toolkit as well as in ModelPack.
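As a rough illustration of that cleaning step, the snippet below shows how degenerate annotations could be filtered out. It is a minimal sketch, not the actual import code: it assumes each annotation stores its box as (x1, y1, x2, y2) pixel coordinates, and the 2-pixel minimum size is a hypothetical threshold.

```python
# Minimal sketch of the annotation cleaning described above.
# Assumes each annotation is a dict with a "box" key holding
# (x1, y1, x2, y2) pixel coordinates; the min_size threshold is a
# hypothetical value, not the one used in the actual import.

def clean_annotations(annotations, min_size=2.0):
    """Drop 0-width, 0-height, and extremely small boxes."""
    cleaned = []
    for ann in annotations:
        x1, y1, x2, y2 = ann["box"]
        width, height = x2 - x1, y2 - y1
        if width <= 0 or height <= 0:
            continue  # degenerate 0-width / 0-height box
        if width < min_size or height < min_size:
            continue  # extremely small object
        cleaned.append(ann)
    return cleaned
```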
The dataset has a high class imbalance: the Bike, Motor, Rider, and Train classes do not have sufficient annotations. Other than that, the dataset can be used to solve real-world, application-specific problems.
Use Case
Autonomous driving is an exciting field of research in computer vision. One of the major challenges in this area is detecting objects in real time while the car is driving and using those predictions in higher-level applications capable of anticipating undesirable situations (accidents). The following list shows some of the common use cases that can be solved using ModelPack and the BDD100K dataset.
- Driver Assistance: Imagine a truck driver driving in difficult weather conditions (snow, rain, night) where visibility is compromised. This dataset contains a large number of images with objects annotated under different weather and illumination conditions. By combining ModelPack and BDD100K, you can build a system that inspects the road at 30 frames per second and notifies the driver when vehicles are present.
- Back-up accident prevention: A large number of accidents are reported when driving a car in reverse inside a parking lot, because people walk through these areas assuming all the cars are parked and empty. In most cases the driver is not paying attention to the camera and is not aware that the object behind the car is a person. This dataset will help you build a system that stops the car immediately when it detects a person walking behind it (a minimal sketch of this logic appears after this list).
- Counting vehicles: Bridges and highways are repaired after a fixed amount of time because traffic is not actually counted across the years. A potential application of this dataset and ModelPack related to counting cars is to build a heatmap over the road and report possible damage once the number of heavy vehicles (trucks and buses) rolling over the road reaches a threshold.
- Parking lot: Compute an occupancy grid and report the number of available parking spots. In some cases it can also be used to control automatic doors (open/close).
- Speed control: It is possible to build a solution based on optical flow and car detection to monitor speed limits at specific points in the city (near schools, playgrounds, etc.); a sketch of this idea appears at the end of this section.
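As promised in the back-up accident prevention item, the sketch below illustrates the alert logic. It is a hedged example, not ModelPack's API: `detections` stands for the output of any ModelPack/DeepViewRT detector wrapper returning (label, score, box) tuples, the class name may differ in your export, and the box-area heuristic is a stand-in for real distance estimation.

```python
# Hedged sketch of back-up accident prevention logic.
# `detections` is assumed to be a list of (label, score, (x1, y1, x2, y2))
# tuples produced by a detector on a rear-camera frame; the label name
# and thresholds below are illustrative only.

PERSON_LABEL = "person"   # adjust to the class name used in your export
SCORE_THRESHOLD = 0.5     # minimum detection confidence
AREA_THRESHOLD = 0.05     # box area (fraction of frame) treated as "close"

def should_stop(detections, frame_width, frame_height):
    """Return True if a sufficiently close person is detected."""
    frame_area = float(frame_width * frame_height)
    for label, score, (x1, y1, x2, y2) in detections:
        if label != PERSON_LABEL or score < SCORE_THRESHOLD:
            continue
        box_area = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        if box_area / frame_area >= AREA_THRESHOLD:
            return True  # person close behind the vehicle: stop / alert
    return False
```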
In general, detecting driving-related objects is at the core of many real-world applications, and BDD100K is a very good dataset for prototyping a solution in only a few clicks.
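For the speed-control use case mentioned above, the following sketch combines dense optical flow (OpenCV's Farneback implementation) with car detections. The `car_boxes` input, the pixels-per-meter calibration, and the frame rate are all assumptions for illustration; they are not part of ModelPack.

```python
# Hedged sketch: estimate per-vehicle speed from dense optical flow.
# `car_boxes` is assumed to come from a car detector as (x1, y1, x2, y2)
# boxes; PIXELS_PER_METER and FPS are assumed calibration values for a
# fixed roadside camera.
import cv2
import numpy as np

PIXELS_PER_METER = 20.0   # assumed camera calibration
FPS = 30.0                # assumed camera frame rate

def estimate_speeds(prev_gray, gray, car_boxes):
    """Return an approximate speed (km/h) for each detected car box."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)  # pixels moved per frame
    speeds = []
    for x1, y1, x2, y2 in car_boxes:
        patch = magnitude[int(y1):int(y2), int(x1):int(x2)]
        if patch.size == 0:
            speeds.append(0.0)
            continue
        pixels_per_frame = float(patch.mean())
        meters_per_second = pixels_per_frame * FPS / PIXELS_PER_METER
        speeds.append(meters_per_second * 3.6)  # convert m/s to km/h
    return speeds
```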
Dataset Stats
Evaluation Results
This dataset is intended for testing object detection algorithms at the edge. For that reason, we summarize some results in Table 2 and Table 3.
Table 2: Evaluation metrics (mAP @ 0.5). The following results were obtained after training ModelPack for 30 epochs.

| Model     | Float RTM | Per-Channel-Quantized RTM | Per-Tensor-Quantized RTM |
|-----------|-----------|---------------------------|--------------------------|
| ModelPack | 45.37%    | 39.31%                    | 38.27%                   |
Table 3: NXP i.MX 8M Plus inference timings on CPU, GPU, and NPU (milliseconds)

| Model         | Float RTM | Per-Channel-Quantized RTM | Per-Tensor-Quantized RTM |
|---------------|-----------|---------------------------|--------------------------|
| ModelPack CPU | -         | 310.80                    | 310.81                   |
| ModelPack GPU | 3235.88   | 2792.36                   | 1065.09                  |
| ModelPack NPU | 3828.08   | 18.56                     | 15.00                    |
Conclusions
In this article we introduced a very useful dataset for solving real-world problems. We also presented evaluation results and the checkpoints needed to reproduce them when using ModelPack.
Download
Download the dataset as an eIQ Portal project file, along with checkpoints and ready-to-use DeepViewRT models.