Understanding Measures of Uncertainty for Adversarial Example Detection
Abstract
Measuring uncertainty is a promising technique for detecting adversarial examples, crafted inputs on which the model predicts an incorrect class with high confidence. But many measures of uncertainty exist, including predictive entropy and mutual information, each capturing different types of uncertainty. We study these measures, and shed light on why mutual information seems to be effective at the task of adversarial example detection. We highlight failure modes for MC dropout, a widely used approach for estimating uncertainty in deep models. This leads to an improved understanding of the drawbacks of current methods, and a proposal to improve the quality of uncertainty estimates using probabilistic model ensembles. We give illustrative experiments using MNIST to demonstrate the intuition underlying the different measures of uncertainty, as well as experiments on a real-world Kaggle dogs-vs-cats classification dataset.
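As an illustrative sketch (not code from the paper itself), the two measures named in the abstract can be computed from a set of stochastic forward passes, e.g. MC dropout samples: predictive entropy is the entropy of the averaged predictive distribution, and mutual information subtracts the expected entropy of the individual samples. The NumPy-based setup and function names below are assumptions for illustration.

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of the mean predictive distribution, H[y | x].

    probs: array of shape (T, C) -- class probabilities from T stochastic
    forward passes (e.g. MC dropout samples) for a single input.
    """
    p_mean = probs.mean(axis=0)                      # average over the T samples
    return -np.sum(p_mean * np.log(p_mean + 1e-12))  # small constant avoids log(0)

def mutual_information(probs):
    """Mutual information between the prediction and the model parameters.

    I(y; w | x) = H[E_w p(y|x,w)] - E_w H[p(y|x,w)].
    High values indicate epistemic (model) uncertainty.
    """
    expected_entropy = -np.mean(np.sum(probs * np.log(probs + 1e-12), axis=1))
    return predictive_entropy(probs) - expected_entropy

# Toy usage: 10 hypothetical MC dropout samples over 3 classes.
rng = np.random.default_rng(0)
samples = rng.dirichlet(alpha=[2.0, 2.0, 2.0], size=10)
print("predictive entropy:", predictive_entropy(samples))
print("mutual information:", mutual_information(samples))
```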
Cite
Text
Smith and Gal. "Understanding Measures of Uncertainty for Adversarial Example Detection." Conference on Uncertainty in Artificial Intelligence, 2018.Markdown
[Smith and Gal. "Understanding Measures of Uncertainty for Adversarial Example Detection." Conference on Uncertainty in Artificial Intelligence, 2018.](https://mlanthology.org/uai/2018/smith2018uai-understanding/)BibTeX
@inproceedings{smith2018uai-understanding,
  title     = {{Understanding Measures of Uncertainty for Adversarial Example Detection}},
  author    = {Smith, Lewis and Gal, Yarin},
  booktitle = {Conference on Uncertainty in Artificial Intelligence},
  year      = {2018},
  pages     = {560--569},
  url       = {https://mlanthology.org/uai/2018/smith2018uai-understanding/}
}