Introduction
This article works through a small example of the metric computations in DeepView-Validator to corroborate the results and verify that the computations are correct.
Prerequisites
- Familiarity with how the metrics are calculated, as described in DeepView-Validator Metric Computations.
Discussion
Sample Results
Consider the following validation report:
Consider the following image results from the validation report:
Image 1:
In this image there are 4 ground truths, 3 true positives for class 'three', 1 classification false positive for class 'ace', 1 localization false positive for class 'three', and 1 false negative for class 'three'.
Image 2:
In this image there are 4 ground truths, 2 true positives for class 'three', 2 classification false positives for class 'five', and 2 false negatives for class 'three'.
Image 3:
In this image there are 4 ground truths and 4 true positives for class 'eight'.
Image 4:
In this image there are 4 ground truths and 4 true positives for class 'eight'.
Image 5:
In this image there are 4 ground truths and 4 true positives for class 'jack'.
Analysis
It is clear from the images that, regardless of class, there are 20 ground truths (20 playing cards), 17 true positives, 3 false negatives, 1 localization false positive, and 3 classification false positives.
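These totals can be checked by tallying the per-image counts listed above. The sketch below simply sums the five images; the dictionary keys are illustrative names, not Validator identifiers.

```python
# Per-image counts taken from the five sample images above.
# gt = ground truths, tp = true positives, fn = false negatives,
# fp_loc = localization false positives, fp_cls = classification false positives.
images = [
    {"gt": 4, "tp": 3, "fn": 1, "fp_loc": 1, "fp_cls": 1},  # Image 1
    {"gt": 4, "tp": 2, "fn": 2, "fp_loc": 0, "fp_cls": 2},  # Image 2
    {"gt": 4, "tp": 4, "fn": 0, "fp_loc": 0, "fp_cls": 0},  # Image 3
    {"gt": 4, "tp": 4, "fn": 0, "fp_loc": 0, "fp_cls": 0},  # Image 4
    {"gt": 4, "tp": 4, "fn": 0, "fp_loc": 0, "fp_cls": 0},  # Image 5
]

# Sum each field across all images.
totals = {key: sum(img[key] for img in images) for key in images[0]}
print(totals)  # 20 ground truths, 17 TP, 3 FN, 1 FP_loc, 3 FP_cls
```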
The overall metrics are calculated as follows:
Note: FPall = localization FP (FPl) + classification FP (FPc).
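As a sketch of the overall computation, assuming the standard detection definitions precision = TP / (TP + FPall), recall = TP / (TP + FN), and accuracy = TP / (TP + FPall + FN), the totals above give:

```python
# Overall counts from the analysis above.
tp, fn = 17, 3
fp_loc, fp_cls = 1, 3
fp_all = fp_loc + fp_cls  # FPall = FPl + FPc = 4

# Standard detection formulas (assumed to match the Validator's definitions).
precision = tp / (tp + fp_all)      # 17 / 21
recall = tp / (tp + fn)             # 17 / 20
accuracy = tp / (tp + fp_all + fn)  # 17 / 24

print(f"precision = {precision:.4f}")  # 0.8095
print(f"recall    = {recall:.4f}")     # 0.8500
print(f"accuracy  = {accuracy:.4f}")   # 0.7083
```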
The mean average metrics are calculated as follows:
Note: since all calculated IoUs are greater than 0.75, the metrics mAP 0.75, mAR 0.75, and mACC 0.75 are the same as those calculated above.
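The per-class side of the computation can be sketched as follows. The counts below are read off the image descriptions, with each classification false positive charged to the predicted class ('ace' and 'five'). How the Validator averages classes that have no ground truths is an assumption here: this sketch treats an undefined 0/0 ratio as 0, so the printed means may not match the report exactly.

```python
# Per-class counts read off the five images. Classification FPs are
# charged to the predicted class ('ace' in image 1, 'five' in image 2).
per_class = {
    "three": {"tp": 5, "fp": 1, "fn": 3},  # images 1 and 2
    "ace":   {"tp": 0, "fp": 1, "fn": 0},  # classification FP in image 1
    "five":  {"tp": 0, "fp": 2, "fn": 0},  # classification FPs in image 2
    "eight": {"tp": 8, "fp": 0, "fn": 0},  # images 3 and 4
    "jack":  {"tp": 4, "fp": 0, "fn": 0},  # image 5
}

def safe_div(num, den):
    """Return 0.0 for an empty denominator (an assumed convention)."""
    return num / den if den else 0.0

# Per-class precision and recall, then the unweighted mean over classes.
precisions = [safe_div(c["tp"], c["tp"] + c["fp"]) for c in per_class.values()]
recalls = [safe_div(c["tp"], c["tp"] + c["fn"]) for c in per_class.values()]

mAP = sum(precisions) / len(precisions)
mAR = sum(recalls) / len(recalls)
print(f"mAP = {mAP:.4f}, mAR = {mAR:.4f}")
```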
Conclusion
In this article, an example was shown of how to calculate DeepView-Validator report metrics. For reference, the results containing the images and the JSON are attached to this article.