How to Train Your ViT for OOD Detection

Abstract

Vision Transformers (ViTs) have been shown to be powerful out-of-distribution (OOD) detectors in ImageNet-scale settings when finetuned from publicly available checkpoints, often outperforming other model types on popular benchmarks. In this work, we investigate the impact of both the pretraining and the finetuning scheme on the performance of ViTs on this task by analyzing a large pool of models. We find that the exact type of pretraining has a strong impact on which OOD detection method works well and on detection performance in general. We further show that certain training schemes might only be effective for a specific type of out-distribution, but not in general, and identify a best-practice training recipe.
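For readers unfamiliar with the setup, the simplest post-hoc OOD detection baseline in this line of work scores each input by the classifier's maximum softmax probability (MSP) and flags low-confidence inputs as OOD. The sketch below illustrates that general setup only; it is not the recipe identified in the paper, and the timm checkpoint name and threshold are assumptions.

import torch
import timm

# Pretrained ViT classifier; the checkpoint name is a placeholder, not
# necessarily one of the models analyzed in the paper.
model = timm.create_model("vit_base_patch16_224", pretrained=True)
model.eval()

@torch.no_grad()
def msp_score(x: torch.Tensor) -> torch.Tensor:
    """Return the max softmax probability per image; low values suggest OOD."""
    logits = model(x)                    # (batch, num_classes)
    probs = torch.softmax(logits, dim=-1)
    return probs.max(dim=-1).values      # (batch,)

# Usage: flag inputs whose confidence falls below a threshold chosen on
# held-out validation data (0.5 here is an arbitrary placeholder).
x = torch.randn(4, 3, 224, 224)          # stand-in for preprocessed images
scores = msp_score(x)
is_ood = scores < 0.5

In practice the threshold is tuned on in-distribution validation data (e.g., to fix the false-positive rate), and benchmarks report threshold-free metrics such as AUROC instead.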

Cite

Text

Müller and Hein. "How to Train Your ViT for OOD Detection." ICLR 2024 Workshops: R2-FM, 2024.

Markdown

[Müller and Hein. "How to Train Your ViT for OOD Detection." ICLR 2024 Workshops: R2-FM, 2024.](https://mlanthology.org/iclrw/2024/muller2024iclrw-train/)

BibTeX

@inproceedings{muller2024iclrw-train,
  title     = {{How to Train Your ViT for OOD Detection}},
  author    = {Müller, Maximilian and Hein, Matthias},
  booktitle = {ICLR 2024 Workshops: R2-FM},
  year      = {2024},
  url       = {https://mlanthology.org/iclrw/2024/muller2024iclrw-train/}
}