Evaluating Robustness of Predictive Uncertainty Estimation: Are Dirichlet-Based Models Reliable?

Abstract

Dirichlet-based uncertainty (DBU) models are a recent and promising class of uncertainty-aware models. DBU models predict the parameters of a Dirichlet distribution to provide fast, high-quality uncertainty estimates alongside class predictions. In this work, we present the first large-scale, in-depth study of the robustness of DBU models under adversarial attacks. Our results suggest that uncertainty estimates of DBU models are not robust w.r.t. three important tasks: (1) indicating correctly and wrongly classified samples; (2) detecting adversarial examples; and (3) distinguishing between in-distribution (ID) and out-of-distribution (OOD) data. Additionally, we explore the first approaches to make DBU models more robust. While adversarial training has a minor effect, our median smoothing based approach significantly increases robustness of DBU models.
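To illustrate the core idea the abstract describes, the following is a minimal sketch of how a DBU model's predicted Dirichlet concentration parameters yield both class probabilities and uncertainty scores. The helper name, the example alpha values, and the choice of uncertainty measures (total evidence and predictive entropy) are illustrative assumptions, not the paper's specific method.

```python
import numpy as np


def dirichlet_uncertainty(alpha):
    """Turn Dirichlet concentration parameters (one per class) into
    expected class probabilities and simple uncertainty scores.

    Note: illustrative sketch only; DBU models in the paper may use
    other uncertainty measures (e.g. differential entropy, mutual
    information)."""
    alpha = np.asarray(alpha, dtype=float)
    alpha0 = alpha.sum()                    # total evidence (precision)
    probs = alpha / alpha0                  # mean of the Dirichlet
    max_prob = probs.max()                  # confidence in predicted class
    # Entropy of the expected categorical distribution
    entropy = -np.sum(probs * np.log(probs + 1e-12))
    return probs, max_prob, entropy


# High evidence concentrated on one class: confident prediction
p_conf, c_conf, h_conf = dirichlet_uncertainty([50.0, 1.0, 1.0])

# Flat, low evidence: uncertain prediction (e.g. on OOD input)
p_unc, c_unc, h_unc = dirichlet_uncertainty([1.0, 1.0, 1.0])
```

A single forward pass suffices to obtain these scores, which is the speed advantage of DBU models over sampling-based uncertainty estimation; the robustness question the paper studies is whether such scores remain meaningful under adversarial perturbations of the input.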

Cite

Text

Kopetzki et al. "Evaluating Robustness of Predictive Uncertainty Estimation: Are Dirichlet-Based Models Reliable?" International Conference on Machine Learning, 2021.

Markdown

[Kopetzki et al. "Evaluating Robustness of Predictive Uncertainty Estimation: Are Dirichlet-Based Models Reliable?" International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/kopetzki2021icml-evaluating/)

BibTeX

@inproceedings{kopetzki2021icml-evaluating,
  title     = {{Evaluating Robustness of Predictive Uncertainty Estimation: Are Dirichlet-Based Models Reliable?}},
  author    = {Kopetzki, Anna-Kathrin and Charpentier, Bertrand and Zügner, Daniel and Giri, Sandhya and Günnemann, Stephan},
  booktitle = {International Conference on Machine Learning},
  year      = {2021},
  pages     = {5707--5718},
  volume    = {139},
  url       = {https://mlanthology.org/icml/2021/kopetzki2021icml-evaluating/}
}