Why Object Detectors Fail: Investigating the Influence of the Dataset
Abstract
A false negative in object detection describes an object that was not correctly localised and classified by a detector. In prior work, we introduced five ‘false negative mechanisms’ that identify the specific component inside the detector architecture that failed to detect the object. Using these mechanisms, we explore how different computer vision datasets and their inherent characteristics can influence object detector failures. Specifically, we investigate the false negative mechanisms of Faster R-CNN and RetinaNet across five computer vision datasets, namely Microsoft COCO, Pascal VOC, ExDark, ObjectNet, and COD10K. Our results show that object size and class influence the false negative mechanisms of object detectors. We also show that comparing the false negative mechanisms of a single object class across different datasets can highlight potentially unknown biases in datasets.
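The analysis starts from false negatives: ground-truth objects that no detection matches, which are then grouped by attributes such as class and object size before being attributed to a specific false negative mechanism. The sketch below illustrates only that first step under common assumptions (an IoU threshold of 0.5 and COCO-style size bins); it is not the paper's actual tooling, and all names and data structures are illustrative.

```python
# Minimal sketch (not the paper's code): flag ground-truth objects that no
# detection matches at IoU >= 0.5, then bucket those false negatives by
# class and by COCO-style size category. Thresholds and field names are
# illustrative assumptions.
from collections import Counter


def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def size_bucket(box):
    """COCO-style size bins: small < 32^2 px, medium < 96^2 px, else large."""
    area = (box[2] - box[0]) * (box[3] - box[1])
    if area < 32 ** 2:
        return "small"
    if area < 96 ** 2:
        return "medium"
    return "large"


def count_false_negatives(ground_truths, detections, iou_thresh=0.5):
    """Count unmatched ground-truth objects, keyed by (class, size bucket).

    ground_truths / detections: lists of dicts with 'box' and 'label' keys.
    """
    fn_counts = Counter()
    for gt in ground_truths:
        matched = any(
            det["label"] == gt["label"] and iou(det["box"], gt["box"]) >= iou_thresh
            for det in detections
        )
        if not matched:
            fn_counts[(gt["label"], size_bucket(gt["box"]))] += 1
    return fn_counts
```

Comparing such per-class, per-size false negative counts for the same class across datasets (e.g. COCO vs. ExDark) is what allows dataset-specific biases to surface, as described in the abstract.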
Cite
Text
Miller et al. "Why Object Detectors Fail: Investigating the Influence of the Dataset." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2022. doi:10.1109/CVPRW56347.2022.00529

Markdown
[Miller et al. "Why Object Detectors Fail: Investigating the Influence of the Dataset." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2022.](https://mlanthology.org/cvprw/2022/miller2022cvprw-object/) doi:10.1109/CVPRW56347.2022.00529

BibTeX
@inproceedings{miller2022cvprw-object,
title = {{Why Object Detectors Fail: Investigating the Influence of the Dataset}},
author = {Miller, Dimity and Goode, Georgia and Bennie, Callum and Moghadam, Peyman and Jurdak, Raja},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2022},
  pages = {4822--4829},
doi = {10.1109/CVPRW56347.2022.00529},
url = {https://mlanthology.org/cvprw/2022/miller2022cvprw-object/}
}