Towards Maximizing the Representation Gap Between In-Domain & Out-of-Distribution Examples
Abstract
Among existing uncertainty estimation approaches, the Dirichlet Prior Network (DPN) distinctly models different types of predictive uncertainty. However, for in-domain examples with high data uncertainty spread among multiple classes, even a DPN model often produces representations that are indistinguishable from those of out-of-distribution (OOD) examples, compromising OOD detection performance. We address this shortcoming by proposing a novel loss function for DPN that maximizes the representation gap between in-domain and OOD examples. Experimental results demonstrate that our proposed approach consistently improves OOD detection performance.
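The gap the abstract describes can be illustrated numerically. A DPN outputs the concentration parameters of a Dirichlet distribution over class probabilities; an ambiguous in-domain input and an OOD input can both yield a near-uniform expected categorical, so predictive entropy alone cannot separate them, whereas the Dirichlet precision (the sum of concentrations) can. The sketch below uses hypothetical concentration values, not outputs of the paper's model, to show this effect:

```python
import math

def expected_probs(alphas):
    """Mean of the Dirichlet: the expected categorical distribution."""
    a0 = sum(alphas)
    return [a / a0 for a in alphas]

def predictive_entropy(alphas):
    """Entropy of the expected categorical (total predictive uncertainty)."""
    return -sum(p * math.log(p) for p in expected_probs(alphas) if p > 0)

# Hypothetical concentrations a DPN might produce (illustrative values only).
confident = [100.0, 1.0, 1.0]   # in-domain, one dominant class
ambiguous = [50.0, 50.0, 1.0]   # in-domain, high data uncertainty across two classes
ood_flat  = [1.0, 1.0, 1.0]     # flat Dirichlet: a common OOD target for standard DPNs
ood_sharp = [0.2, 0.2, 0.2]     # alpha_c < 1: widens the gap to in-domain precisions

for name, a in [("confident", confident), ("ambiguous", ambiguous),
                ("ood_flat", ood_flat), ("ood_sharp", ood_sharp)]:
    print(f"{name:10s} precision={sum(a):7.2f} entropy={predictive_entropy(a):.3f}")
```

Both OOD settings give the maximal entropy log 3, the same order as the ambiguous in-domain case, but the precision drops from roughly 100 (in-domain) to 3 and then to 0.6, so pushing concentrations below 1 for OOD inputs enlarges the separation that precision-based detection relies on.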
Cite
Text
Nandy et al. "Towards Maximizing the Representation Gap Between In-Domain & Out-of-Distribution Examples." Neural Information Processing Systems, 2020.
Markdown
[Nandy et al. "Towards Maximizing the Representation Gap Between In-Domain & Out-of-Distribution Examples." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/nandy2020neurips-maximizing/)
BibTeX
@inproceedings{nandy2020neurips-maximizing,
title = {{Towards Maximizing the Representation Gap Between In-Domain & Out-of-Distribution Examples}},
author = {Nandy, Jay and Hsu, Wynne and Lee, Mong Li},
booktitle = {Neural Information Processing Systems},
year = {2020},
url = {https://mlanthology.org/neurips/2020/nandy2020neurips-maximizing/}
}