Adversarial Robustness on In- and Out-Distribution Improves Explainability

Abstract

Neural networks have led to major improvements in image classification, but they suffer from non-robustness to adversarial changes, unreliable uncertainty estimates on out-distribution samples, and inscrutable black-box decisions. In this work we propose RATIO, a training procedure for Robustness via Adversarial Training on In- and Out-distribution, which leads to robust models with reliable and robust confidence estimates on the out-distribution. RATIO has generative properties similar to adversarial training, so that visual counterfactuals produce class-specific features. While adversarial training comes at the price of lower clean accuracy, RATIO achieves state-of-the-art $l_2$-adversarial robustness on CIFAR10 while maintaining better clean accuracy.
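The abstract describes a combined objective: standard adversarial training on in-distribution samples plus adversarially enforced low confidence on out-distribution samples. Below is a minimal PyTorch-style sketch of such an objective, assuming an $l_2$-PGD inner maximization. The helper `l2_pgd`, the radii `eps`/`alpha`, the step count, the weighting `lam`, and the KL-to-uniform formulation of the out-distribution term are illustrative assumptions, not the paper's exact settings.

```python
# Sketch of an in-/out-distribution adversarial training objective.
# All hyperparameters and the exact form of the out-distribution term
# are assumptions for illustration, not the paper's configuration.
import torch
import torch.nn.functional as F


def l2_pgd(model, x, loss_fn, eps=0.5, alpha=0.1, steps=10):
    """Approximately maximize loss_fn(model(x + delta)) over an l2-ball of radius eps."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(model(x + delta))
        grad = torch.autograd.grad(loss, delta)[0]
        # Ascent step, normalized per sample in l2 norm (assumes 4D image batches).
        g_norm = grad.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        delta = (delta + alpha * grad / g_norm).detach()
        # Project back onto the l2-ball of radius eps.
        d_norm = delta.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        delta = (delta * (eps / d_norm).clamp(max=1.0)).requires_grad_(True)
    return (x + delta).detach()


def in_out_adv_loss(model, x_in, y_in, x_out, num_classes, lam=1.0):
    """Adversarial CE on in-distribution + adversarial low-confidence term on out-distribution."""
    # In-distribution: standard adversarial training with cross-entropy.
    x_in_adv = l2_pgd(model, x_in, lambda logits: F.cross_entropy(logits, y_in))
    loss_in = F.cross_entropy(model(x_in_adv), y_in)

    # Out-distribution: the attacker pushes predictions away from uniform
    # (high confidence); training minimizes this, enforcing low confidence.
    uniform = torch.full((x_out.shape[0], num_classes),
                         1.0 / num_classes, device=x_out.device)

    def conf_loss(logits):
        return F.kl_div(F.log_softmax(logits, dim=1), uniform,
                        reduction='batchmean')

    x_out_adv = l2_pgd(model, x_out, conf_loss)
    loss_out = conf_loss(model(x_out_adv))
    return loss_in + lam * loss_out
```

The same inner maximizer serves both terms: on the in-distribution it attacks the label, on the out-distribution it attacks the model's calibration, which is what yields confidence estimates that remain reliable under perturbation.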

Cite

Text

Augustin et al. "Adversarial Robustness on In- and Out-Distribution Improves Explainability." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58574-7_14

Markdown

[Augustin et al. "Adversarial Robustness on In- and Out-Distribution Improves Explainability." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/augustin2020eccv-adversarial/) doi:10.1007/978-3-030-58574-7_14

BibTeX

@inproceedings{augustin2020eccv-adversarial,
  title     = {{Adversarial Robustness on In- and Out-Distribution Improves Explainability}},
  author    = {Augustin, Maximilian and Meinke, Alexander and Hein, Matthias},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2020},
  doi       = {10.1007/978-3-030-58574-7_14},
  url       = {https://mlanthology.org/eccv/2020/augustin2020eccv-adversarial/}
}