On Outlier Exposure with Generative Models
Abstract
While Outlier Exposure reliably increases the performance of Out-of-Distribution detectors, it requires a set of available outliers during training. In this paper, we propose Generative Outlier Exposure (GOE), which alleviates the need for available outliers by using generative models to sample synthetic outliers from low-density regions of the data distribution. The approach requires no modification of the generator, works on image and text data, and can be used with pre-trained models. We demonstrate the effectiveness of generated outliers on several image and text datasets, including ImageNet.
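The abstract above describes training an Out-of-Distribution detector with Outlier Exposure, where the auxiliary outliers are drawn from low-density regions of a generative model instead of a real outlier dataset. The sketch below is an illustrative reconstruction, not the paper's implementation: it shows the standard Outlier Exposure objective (cross-entropy on in-distribution data plus a term pushing outlier predictions toward the uniform distribution), and a hypothetical `sample_low_density` helper that keeps the generator samples a density estimate scores lowest. The function names, the candidate/keep counts, and the density proxy are all assumptions for illustration.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def oe_loss(logits_in, labels_in, logits_out, lam=0.5):
    """Outlier Exposure objective: CE on in-distribution samples plus
    (up to a constant) cross-entropy to the uniform distribution on
    outlier samples, weighted by lam."""
    p_in = softmax(logits_in)
    ce = -np.log(p_in[np.arange(len(labels_in)), labels_in]).mean()
    log_p_out = np.log(softmax(logits_out))
    uniform_ce = -log_p_out.mean(axis=1).mean()
    return ce + lam * uniform_ce

def sample_low_density(generate, density, n_candidates=1000, n_keep=100):
    """Hypothetical low-density sampling: draw many candidates from an
    unmodified generator and keep the n_keep with the lowest estimated
    density (the exact selection criterion is an assumption here)."""
    x = generate(n_candidates)
    scores = density(x)
    idx = np.argsort(scores)[:n_keep]
    return x[idx]
```

With a pre-trained generator, `sample_low_density` yields synthetic outliers that are then fed to `oe_loss` alongside labeled in-distribution batches; no change to the generator itself is required, matching the claim in the abstract.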
Cite
Text
Kirchheim and Ortmeier. "On Outlier Exposure with Generative Models." NeurIPS 2022 Workshops: MLSW, 2022.
Markdown
[Kirchheim and Ortmeier. "On Outlier Exposure with Generative Models." NeurIPS 2022 Workshops: MLSW, 2022.](https://mlanthology.org/neuripsw/2022/kirchheim2022neuripsw-outlier/)
BibTeX
@inproceedings{kirchheim2022neuripsw-outlier,
title = {{On Outlier Exposure with Generative Models}},
author = {Kirchheim, Konstantin and Ortmeier, Frank},
booktitle = {NeurIPS 2022 Workshops: MLSW},
year = {2022},
url = {https://mlanthology.org/neuripsw/2022/kirchheim2022neuripsw-outlier/}
}