Learning to Name Classes for Vision and Language Models

Abstract

Large-scale vision and language models can achieve impressive zero-shot recognition performance by mapping class-specific text queries to image content. Two distinct challenges remain, however: high sensitivity to the choice of handcrafted class names that define queries, and the difficulty of adaptation to new, smaller datasets. Towards addressing these problems, we propose to leverage available data to learn, for each class, an optimal word embedding as a function of the visual content. By learning new word embeddings on an otherwise frozen model, we are able to retain zero-shot capabilities for new classes, easily adapt models to new datasets, and adjust potentially erroneous, non-descriptive or ambiguous class names. We show that our solution can easily be integrated into image classification and object detection pipelines, yields significant performance gains in multiple scenarios, and provides insights into model biases and labelling errors.
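
The abstract describes learning, for each class, a word embedding that stands in for the handwritten class name inside the text query, while the rest of the vision and language model stays frozen. Below is a minimal, illustrative PyTorch sketch of that idea; the frozen `vlm` object and its `encode_image` and `encode_prompt_with_embeds` methods are placeholder assumptions for this example, not the interface used in the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnedClassNames(nn.Module):
    """One learnable word embedding per class, trained on top of an
    otherwise frozen vision and language model (interface assumed)."""

    def __init__(self, vlm, num_classes, embed_dim):
        super().__init__()
        self.vlm = vlm
        for p in self.vlm.parameters():
            p.requires_grad = False          # backbone stays frozen
        # learnable class-name embeddings, one row per class
        self.class_embeds = nn.Parameter(0.02 * torch.randn(num_classes, embed_dim))

    def forward(self, images):
        # frozen image encoder (placeholder method name)
        img_feats = F.normalize(self.vlm.encode_image(images), dim=-1)
        # text queries built from a prompt template whose class-name slot is
        # filled with the learned embeddings (placeholder method name)
        txt_feats = F.normalize(self.vlm.encode_prompt_with_embeds(self.class_embeds), dim=-1)
        # cosine-similarity logits: one score per (image, class) pair
        return img_feats @ txt_feats.t()

# Training sketch: only the class-name embeddings receive gradients
# (temperature scaling and other training details are omitted here).
# model = LearnedClassNames(frozen_vlm, num_classes=1000, embed_dim=512)
# optimizer = torch.optim.AdamW([model.class_embeds], lr=1e-3)
# loss = F.cross_entropy(model(images), labels)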

Cite

Text

Parisot et al. "Learning to Name Classes for Vision and Language Models." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.02248

Markdown

[Parisot et al. "Learning to Name Classes for Vision and Language Models." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/parisot2023cvpr-learning/) doi:10.1109/CVPR52729.2023.02248

BibTeX

@inproceedings{parisot2023cvpr-learning,
  title     = {{Learning to Name Classes for Vision and Language Models}},
  author    = {Parisot, Sarah and Yang, Yongxin and McDonagh, Steven},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2023},
  pages     = {23477--23486},
  doi       = {10.1109/CVPR52729.2023.02248},
  url       = {https://mlanthology.org/cvpr/2023/parisot2023cvpr-learning/}
}