Out-of-Distribution Detection with Adversarial Outlier Exposure

Abstract

Machine learning models typically perform reliably only on inputs drawn from the distribution they were trained on, making Out-of-Distribution (OOD) detection essential for safety-critical applications. While exposing models to outlier examples during training is one of the most effective ways to enhance OOD detection, recent studies suggest that synthetically generated outliers can also act as regularizers for deep neural networks. In this paper, we propose an augmentation scheme for synthetic outliers that regularizes a classifier's energy function by adversarially lowering the outliers' energy during training. We demonstrate that our method improves OOD detection performance and strengthens adversarial robustness on OOD data across several image classification benchmarks. Additionally, we show that our approach preserves in-distribution generalization. Our code is publicly available.
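To make the idea in the abstract concrete, the sketch below illustrates one plausible reading of it: synthetic outliers are adversarially perturbed so that their energy score E(x) = -T * logsumexp(f(x)/T) decreases (i.e., they look more in-distribution), and the classifier is then trained with a standard energy-margin regularizer against these hardened outliers. This is a minimal, hypothetical PyTorch sketch, not the authors' implementation; all names and hyperparameters (model, step_size, eps, m_in, m_out, lam) are assumptions for illustration.

import torch
import torch.nn.functional as F

def energy(logits, T=1.0):
    # Energy score E(x) = -T * logsumexp(f(x)/T); lower energy ~ more in-distribution.
    return -T * torch.logsumexp(logits / T, dim=1)

def adversarial_outliers(model, x_out, step_size=1.0/255, n_steps=4, eps=4.0/255):
    # Hypothetical PGD-style inner loop: perturb synthetic outliers to *lower*
    # their energy, producing harder negatives for the regularizer below.
    x_adv = x_out.clone().detach().requires_grad_(True)
    for _ in range(n_steps):
        e = energy(model(x_adv)).sum()
        grad, = torch.autograd.grad(e, x_adv)
        # Gradient *descent* on the energy lowers the outliers' energy.
        x_adv = x_adv - step_size * grad.sign()
        # Project back into the eps-ball around the original outliers.
        x_adv = x_out + (x_adv - x_out).clamp(-eps, eps)
        x_adv = x_adv.clamp(0, 1).detach().requires_grad_(True)
    return x_adv.detach()

def training_loss(model, x_in, y_in, x_out, lam=0.1, m_in=-25.0, m_out=-7.0):
    # Cross-entropy on in-distribution data ...
    loss_ce = F.cross_entropy(model(x_in), y_in)
    # ... plus a standard squared-hinge energy-margin regularizer: push
    # in-distribution energy below m_in and (adversarial) outlier energy above m_out.
    x_adv = adversarial_outliers(model, x_out)
    e_in = energy(model(x_in))
    e_out = energy(model(x_adv))
    loss_energy = (F.relu(e_in - m_in) ** 2).mean() + (F.relu(m_out - e_out) ** 2).mean()
    return loss_ce + lam * loss_energy

At test time, the same energy score would serve as the OOD detector: inputs whose energy exceeds a validation-chosen threshold are flagged as out-of-distribution.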

Cite

Text

Botschen et al. "Out-of-Distribution Detection with Adversarial Outlier Exposure." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2025.

Markdown

[Botschen et al. "Out-of-Distribution Detection with Adversarial Outlier Exposure." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2025.](https://mlanthology.org/cvprw/2025/botschen2025cvprw-outofdistribution/)

BibTeX

@inproceedings{botschen2025cvprw-outofdistribution,
  title     = {{Out-of-Distribution Detection with Adversarial Outlier Exposure}},
  author    = {Botschen, Thomas and Kirchheim, Konstantin and Ortmeier, Frank},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2025},
  pages     = {4391--4400},
  url       = {https://mlanthology.org/cvprw/2025/botschen2025cvprw-outofdistribution/}
}