Delving into Out-of-Distribution Detection with Vision-Language Representations

Abstract

Recognizing out-of-distribution (OOD) samples is critical for machine learning systems deployed in the open world. The vast majority of OOD detection methods are driven by a single modality (e.g., either vision or language), leaving the rich information in multi-modal representations untapped. Inspired by the recent success of vision-language pre-training, this paper enriches the landscape of OOD detection from a single-modal to a multi-modal regime. Particularly, we propose Maximum Concept Matching (MCM), a simple yet effective zero-shot OOD detection method based on aligning visual features with textual concepts. We contribute in-depth analysis and theoretical insights to understand the effectiveness of MCM. Extensive experiments demonstrate that MCM achieves superior performance on a wide variety of real-world tasks. MCM with vision-language features outperforms a common baseline with pure visual features on a hard OOD task with semantically similar classes by 13.1% (AUROC). Code is available at https://github.com/deeplearning-wisc/MCM.
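For intuition, here is a minimal sketch of a maximum-concept-matching style score: the image embedding is compared against text embeddings of the in-distribution class names, the cosine similarities are passed through a temperature-scaled softmax, and the maximum probability serves as the ID-ness score. This assumes pre-computed vision-language embeddings (e.g., from CLIP); the function name, tensor shapes, temperature, and threshold below are illustrative assumptions, not taken from the paper's released code.

```python
import torch
import torch.nn.functional as F

def mcm_score(image_feats: torch.Tensor, concept_feats: torch.Tensor,
              tau: float = 1.0) -> torch.Tensor:
    """Maximum softmax-scaled cosine similarity between each image embedding
    and the in-distribution concept (class-name) embeddings.

    image_feats:   (B, D) image embeddings
    concept_feats: (K, D) text embeddings, one per ID concept
    Returns a (B,) tensor; higher scores suggest in-distribution inputs.
    """
    # L2-normalize so inner products are cosine similarities.
    img = F.normalize(image_feats, dim=-1)
    txt = F.normalize(concept_feats, dim=-1)
    sims = img @ txt.t()                   # (B, K) cosine similarities
    probs = F.softmax(sims / tau, dim=-1)  # temperature-scaled softmax over concepts
    return probs.max(dim=-1).values        # score = max matching probability

# Toy usage with random stand-ins for CLIP features: inputs scoring below a
# threshold (chosen on validation data) would be flagged as OOD.
if __name__ == "__main__":
    torch.manual_seed(0)
    images = torch.randn(4, 512)     # stand-in for encoded test images
    concepts = torch.randn(10, 512)  # stand-in for embedded class-name prompts
    scores = mcm_score(images, concepts, tau=1.0)
    print(scores, scores < 0.5)
```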

Cite

Text

Ming et al. "Delving into Out-of-Distribution Detection with Vision-Language Representations." Neural Information Processing Systems, 2022.

Markdown

[Ming et al. "Delving into Out-of-Distribution Detection with Vision-Language Representations." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/ming2022neurips-delving/)

BibTeX

@inproceedings{ming2022neurips-delving,
  title     = {{Delving into Out-of-Distribution Detection with Vision-Language Representations}},
  author    = {Ming, Yifei and Cai, Ziyang and Gu, Jiuxiang and Sun, Yiyou and Li, Wei and Li, Yixuan},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/ming2022neurips-delving/}
}