Learning Object-Language Alignments for Open-Vocabulary Object Detection

Abstract

Existing object detection methods are bound to a fixed vocabulary by costly labeled data. When dealing with novel categories, the model has to be retrained with additional bounding box annotations. Natural language supervision is an attractive alternative, as it is annotation-free and covers broader object concepts. However, learning open-vocabulary object detection from language is challenging, since image-text pairs do not contain fine-grained object-language alignments. Previous solutions rely on either expensive grounding annotations or distilling classification-oriented vision models. In this paper, we propose a novel open-vocabulary object detection framework that learns directly from image-text pair data. We formulate object-language alignment as a set matching problem between a set of image region features and a set of word embeddings. This enables us to train an open-vocabulary object detector on image-text pairs in a much simpler and more effective way. Extensive experiments on two benchmark datasets, COCO and LVIS, demonstrate our superior performance over competing approaches on novel categories, e.g., achieving 32.0% mAP on COCO and 21.7% mask mAP on LVIS. Code will be released.
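To make the set-matching formulation concrete, here is a minimal sketch of aligning region features to word embeddings as a bipartite assignment problem. It assumes cosine similarity as the alignment score and the Hungarian algorithm (`scipy.optimize.linear_sum_assignment`) as the solver; the function name and shapes are illustrative, not the authors' actual implementation.

```python
# Minimal sketch: align image region features to word embeddings via
# bipartite set matching. Assumes cosine similarity as the alignment
# score; this is an illustration, not the paper's exact method.
import numpy as np
from scipy.optimize import linear_sum_assignment


def match_regions_to_words(region_feats, word_embs):
    """Return (region_idx, word_idx) pairs maximizing total cosine similarity.

    region_feats: (R, D) array of image region features.
    word_embs:    (W, D) array of word embeddings from the paired caption.
    """
    # L2-normalize so dot products equal cosine similarities.
    r = region_feats / np.linalg.norm(region_feats, axis=1, keepdims=True)
    w = word_embs / np.linalg.norm(word_embs, axis=1, keepdims=True)
    sim = r @ w.T  # (R, W) similarity matrix

    # Hungarian matching minimizes cost, so negate the similarities.
    rows, cols = linear_sum_assignment(-sim)
    return list(zip(rows.tolist(), cols.tolist()))


# Toy usage: 5 candidate regions, 3 caption words, 16-dim embeddings.
rng = np.random.default_rng(0)
pairs = match_regions_to_words(rng.standard_normal((5, 16)),
                               rng.standard_normal((3, 16)))
print(pairs)  # min(R, W) = 3 matched (region, word) pairs
```

The matched pairs can then serve as pseudo-labels for region classification, which is how image-text pairs substitute for box-level annotations in this setting.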

Cite

Text

Lin et al. "Learning Object-Language Alignments for Open-Vocabulary Object Detection." International Conference on Learning Representations, 2023.

Markdown

[Lin et al. "Learning Object-Language Alignments for Open-Vocabulary Object Detection." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/lin2023iclr-learning/)

BibTeX

@inproceedings{lin2023iclr-learning,
  title     = {{Learning Object-Language Alignments for Open-Vocabulary Object Detection}},
  author    = {Lin, Chuang and Sun, Peize and Jiang, Yi and Luo, Ping and Qu, Lizhen and Haffari, Gholamreza and Yuan, Zehuan and Cai, Jianfei},
  booktitle = {International Conference on Learning Representations},
  year      = {2023},
  url       = {https://mlanthology.org/iclr/2023/lin2023iclr-learning/}
}