iDECODe: In-Distribution Equivariance for Conformal Out-of-Distribution Detection
Abstract
Machine learning methods such as deep neural networks (DNNs), despite their success across different domains, are known to often generate incorrect predictions with high confidence on inputs outside their training distribution. The deployment of DNNs in safety-critical domains requires detection of out-of-distribution (OOD) data so that DNNs can abstain from making predictions on such inputs. A number of methods have recently been developed for OOD detection, but there is still room for improvement. We propose the new method iDECODe, leveraging in-distribution equivariance for conformal OOD detection. It relies on a novel base non-conformity measure and a new aggregation method, used within the inductive conformal anomaly detection framework, thereby guaranteeing a bounded false detection rate. We demonstrate the efficacy of iDECODe through experiments on image and audio datasets, obtaining state-of-the-art results. We also show that iDECODe can detect adversarial examples. Code, pre-trained models, and data are available at https://github.com/ramneetk/iDECODe.
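The abstract refers to the inductive conformal anomaly detection framework, which is what yields the bounded false detection rate. As a rough illustration of that generic recipe (not the paper's actual equivariance-based non-conformity measure or aggregation method), the sketch below computes a conformal p-value by comparing a test input's non-conformity score against scores from held-out in-distribution calibration data, and flags the input as OOD when the p-value falls below a chosen level ε; for in-distribution inputs, the false detection rate is then at most ε. The scores used here are random placeholders.

```python
import numpy as np

def conformal_pvalue(cal_scores, test_score):
    """Fraction of calibration scores at least as non-conforming as the
    test score, with the +1 smoothing of inductive conformal anomaly
    detection. Valid p-value when scores are exchangeable."""
    n = len(cal_scores)
    return (1 + np.sum(cal_scores >= test_score)) / (n + 1)

def detect_ood(cal_scores, test_score, epsilon=0.05):
    """Flag the input as OOD when its conformal p-value is <= epsilon.
    For in-distribution inputs, P(false detection) <= epsilon."""
    return conformal_pvalue(cal_scores, test_score) <= epsilon

# Toy usage with placeholder non-conformity scores (hypothetical):
# iDECODe would substitute its equivariance-based measure, aggregated
# over transformations of the input.
rng = np.random.default_rng(0)
cal_scores = rng.normal(0.0, 1.0, size=1000)   # scores on held-out ID data
print(detect_ood(cal_scores, test_score=3.5))  # high score: likely flagged OOD
print(detect_ood(cal_scores, test_score=0.1))  # typical score: likely ID
```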
Cite
Text
Kaur et al. "iDECODe: In-Distribution Equivariance for Conformal Out-of-Distribution Detection." AAAI Conference on Artificial Intelligence, 2022. doi:10.1609/AAAI.V36I7.20670

Markdown
[Kaur et al. "iDECODe: In-Distribution Equivariance for Conformal Out-of-Distribution Detection." AAAI Conference on Artificial Intelligence, 2022.](https://mlanthology.org/aaai/2022/kaur2022aaai-idecode/) doi:10.1609/AAAI.V36I7.20670

BibTeX
@inproceedings{kaur2022aaai-idecode,
title = {{iDECODe: In-Distribution Equivariance for Conformal Out-of-Distribution Detection}},
author = {Kaur, Ramneet and Jha, Susmit and Roy, Anirban and Park, Sangdon and Dobriban, Edgar and Sokolsky, Oleg and Lee, Insup},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2022},
pages = {7104--7114},
doi = {10.1609/AAAI.V36I7.20670},
url = {https://mlanthology.org/aaai/2022/kaur2022aaai-idecode/}
}