How to Evaluate the Generalization of Detection? a Benchmark for Comprehensive Open-Vocabulary Detection

Abstract

Object detection (OD) in computer vision has made significant progress in recent years, transitioning from closed-set labels to open-vocabulary detection (OVD) based on large-scale vision-language pre-training (VLP). However, current evaluation methods and datasets are limited to testing generalization over object types and referral expressions, which do not provide a systematic, fine-grained, and accurate benchmark of OVD models' abilities. In this paper, we propose a new benchmark named OVDEval, which includes 9 sub-tasks and introduces evaluations on commonsense knowledge, attribute understanding, position understanding, object relation comprehension, and more. The dataset is meticulously created to provide hard negatives that challenge models' true understanding of visual and linguistic input. Additionally, we identify a problem with the popular Average Precision (AP) metric when benchmarking models on these fine-grained label datasets and propose a new metric called Non-Maximum Suppression Average Precision (NMS-AP) to address this issue. Extensive experimental results show that existing top OVD models all fail on the new tasks except for simple object types, demonstrating the value of the proposed dataset in pinpointing the weakness of current OVD models and guiding future research. Furthermore, the proposed NMS-AP metric is verified by experiments to provide a much more truthful evaluation of OVD models, whereas traditional AP metrics yield deceptive results. Data is available at https://github.com/om-ai-lab/OVDEval
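The core idea behind NMS-AP, as the abstract describes it, is that standard AP can be inflated when a model hedges by predicting essentially the same box under many fine-grained labels. One way to picture the fix is to apply non-maximum suppression *across* labels before computing AP, so only the model's single best guess per region survives. The sketch below is an illustrative assumption based on that description, not the authors' implementation; the function names, data layout, and IoU threshold are made up for this example.

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def class_agnostic_nms(dets, iou_thr=0.5):
    # dets: list of (box, score, label) tuples (hypothetical layout).
    # Unlike per-class NMS, overlapping boxes are suppressed even when
    # their labels differ, so a model that emits the same box under many
    # fine-grained labels keeps only its highest-scored prediction.
    kept = []
    for box, score, label in sorted(dets, key=lambda d: -d[1]):
        if all(iou(box, k[0]) < iou_thr for k in kept):
            kept.append((box, score, label))
    return kept

dets = [
    ((0, 0, 10, 10), 0.9, "dog sitting"),
    ((0, 0, 10, 10), 0.8, "dog standing"),   # same box, different label
    ((20, 20, 30, 30), 0.7, "cat lying"),
]
survivors = class_agnostic_nms(dets)
# The duplicate "dog standing" box is suppressed; AP would then be
# computed over the two surviving detections.
```

Under this sketch, a model that outputs every fine-grained label for one box is rewarded only for the label it scores highest, which matches the abstract's claim that NMS-AP penalizes such behavior while plain AP does not.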

Cite

Text

Yao et al. "How to Evaluate the Generalization of Detection? a Benchmark for Comprehensive Open-Vocabulary Detection." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I7.28485

Markdown

[Yao et al. "How to Evaluate the Generalization of Detection? a Benchmark for Comprehensive Open-Vocabulary Detection." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/yao2024aaai-evaluate/) doi:10.1609/AAAI.V38I7.28485

BibTeX

@inproceedings{yao2024aaai-evaluate,
  title     = {{How to Evaluate the Generalization of Detection? a Benchmark for Comprehensive Open-Vocabulary Detection}},
  author    = {Yao, Yiyang and Liu, Peng and Zhao, Tiancheng and Zhang, Qianqian and Liao, Jiajia and Fang, Chunxin and Lee, Kyusong and Wang, Qing},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2024},
  pages     = {6630--6638},
  doi       = {10.1609/AAAI.V38I7.28485},
  url       = {https://mlanthology.org/aaai/2024/yao2024aaai-evaluate/}
}