In or Out? Fixing ImageNet Out-of-Distribution Detection Evaluation

Abstract

Out-of-distribution (OOD) detection is the problem of identifying inputs which are unrelated to the in-distribution task. When the in-distribution (ID) is ImageNet-1K, OOD detection performance is commonly tested on a small range of test OOD datasets. We find that most of the currently used test OOD datasets, including datasets from the open set recognition (OSR) literature, have severe issues: in some cases, more than 50% of the dataset contains objects belonging to one of the ID classes. These erroneous samples heavily distort the evaluation of OOD detectors. As a solution, we introduce NINCO, a novel test OOD dataset in which each sample has been checked to be free of ID objects. With its fine-grained range of OOD classes, NINCO allows for a detailed analysis of an OOD detector’s strengths and failure modes, particularly when paired with a number of synthetic “OOD unit-tests”. We provide detailed evaluations across a large set of architectures and OOD detection methods on NINCO and the unit-tests, revealing new insights about model weaknesses and the effects of pretraining on OOD detection performance. We provide code and data at https://github.com/j-cb/NINCO.
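To illustrate the kind of evaluation the abstract refers to, below is a minimal sketch of how one might score an OOD detector against an ID/OOD split. It uses a generic confidence score and AUROC; it is not the paper's evaluation code, and the example score values are placeholders (in practice they could come from, e.g., the maximum softmax probability of an ImageNet classifier on ID validation images and NINCO images).

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical detector scores: higher = "more in-distribution".
id_scores = np.array([0.95, 0.88, 0.99, 0.91])   # e.g. ImageNet-1K validation images
ood_scores = np.array([0.62, 0.45, 0.71, 0.30])  # e.g. test OOD images (such as NINCO)

# AUROC for separating ID (label 1) from OOD (label 0):
# 1.0 means perfect separation, 0.5 means chance level.
labels = np.concatenate([np.ones_like(id_scores), np.zeros_like(ood_scores)])
scores = np.concatenate([id_scores, ood_scores])
print("AUROC:", roc_auc_score(labels, scores))
```

If OOD test images actually contain ID objects, a correct detector is penalized for assigning them high ID scores, which is the distortion the paper targets.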

Cite

Text

Bitterwolf et al. "In or Out? Fixing ImageNet Out-of-Distribution Detection Evaluation." International Conference on Machine Learning, 2023.

Markdown

[Bitterwolf et al. "In or Out? Fixing ImageNet Out-of-Distribution Detection Evaluation." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/bitterwolf2023icml-out/)

BibTeX

@inproceedings{bitterwolf2023icml-out,
  title     = {{In or Out? Fixing ImageNet Out-of-Distribution Detection Evaluation}},
  author    = {Bitterwolf, Julian and Müller, Maximilian and Hein, Matthias},
  booktitle = {International Conference on Machine Learning},
  year      = {2023},
  pages     = {2471--2506},
  volume    = {202},
  url       = {https://mlanthology.org/icml/2023/bitterwolf2023icml-out/}
}