Scaling Open-Vocabulary Object Detection
Abstract
Open-vocabulary object detection has benefited greatly from pretrained vision-language models, but is still limited by the amount of available detection training data. While detection training data can be expanded by using Web image-text pairs as weak supervision, this has not been done at scales comparable to image-level pretraining. Here, we scale up detection data with self-training, which uses an existing detector to generate pseudo-box annotations on image-text pairs. Major challenges in scaling self-training are the choice of label space, pseudo-annotation filtering, and training efficiency. We present the OWLv2 model and OWL-ST self-training recipe, which address these challenges. OWLv2 surpasses the performance of previous state-of-the-art open-vocabulary detectors already at comparable training scales (~10M examples). However, with OWL-ST, we can scale to over 1B examples, yielding further large improvement: With an L/14 architecture, OWL-ST improves AP on LVIS rare classes, for which the model has seen no human box annotations, from 31.2% to 44.6% (43% relative improvement). OWL-ST unlocks Web-scale training for open-world localization, similar to what has been seen for image classification and language modelling. Code and checkpoints are available on GitHub.
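To make the self-training recipe described above concrete, here is a minimal sketch of the pseudo-annotation step: an existing open-vocabulary detector is run on Web image-text pairs, a label space is derived from each caption, and only boxes above a confidence threshold are kept. All names and the threshold value (`caption_ngrams`, `detect_fn`, `score_threshold`) are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch of self-training pseudo-annotation on web image-text pairs.
# Assumptions: detect_fn is any existing open-vocabulary detector that accepts
# (image, text queries) and returns scored boxes; the caption-derived n-gram
# label space and the threshold value are illustrative, not from the paper.
from typing import Callable, Dict, List, Tuple
import re

Box = Tuple[float, float, float, float]  # (x0, y0, x1, y1), normalized coordinates


def caption_ngrams(caption: str, max_n: int = 3) -> List[str]:
    """Derive a per-image label space from the caption: all 1..max_n word n-grams."""
    words = re.findall(r"[a-z]+", caption.lower())
    return [
        " ".join(words[i:i + n])
        for n in range(1, max_n + 1)
        for i in range(len(words) - n + 1)
    ]


def pseudo_annotate(
    image,                                   # decoded image array
    caption: str,
    detect_fn: Callable[..., List[Dict]],    # existing detector: (image, queries) -> detections
    score_threshold: float = 0.3,            # pseudo-annotation filter (illustrative value)
) -> List[Dict]:
    """Generate filtered pseudo-box annotations for one image-text pair."""
    queries = caption_ngrams(caption)
    detections = detect_fn(image, queries)   # each: {"box": Box, "label": str, "score": float}
    return [d for d in detections if d["score"] >= score_threshold]
```

The retained pseudo-annotations would then serve as training targets for the next round of detection training; in the paper, the choice of label space and the filtering threshold are exactly the design decisions studied as part of OWL-ST.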
Cite
Text
Minderer et al. "Scaling Open-Vocabulary Object Detection." Neural Information Processing Systems, 2023.
Markdown
[Minderer et al. "Scaling Open-Vocabulary Object Detection." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/minderer2023neurips-scaling/)
BibTeX
@inproceedings{minderer2023neurips-scaling,
title = {{Scaling Open-Vocabulary Object Detection}},
author = {Minderer, Matthias and Gritsenko, Alexey and Houlsby, Neil},
booktitle = {Neural Information Processing Systems},
year = {2023},
url = {https://mlanthology.org/neurips/2023/minderer2023neurips-scaling/}
}