On Adaptive Attacks to Adversarial Example Defenses
Abstract
Adaptive attacks have (rightfully) become the de facto standard for evaluating defenses to adversarial examples. We find, however, that typical adaptive evaluations are incomplete. We demonstrate that 13 defenses recently published at ICLR, ICML and NeurIPS---and which illustrate a diverse set of defense strategies---can be circumvented despite attempting to perform evaluations using adaptive attacks.
Cite
Text
Tramer et al. "On Adaptive Attacks to Adversarial Example Defenses." Neural Information Processing Systems, 2020.

Markdown

[Tramer et al. "On Adaptive Attacks to Adversarial Example Defenses." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/tramer2020neurips-adaptive/)

BibTeX
@inproceedings{tramer2020neurips-adaptive,
  title = {{On Adaptive Attacks to Adversarial Example Defenses}},
  author = {Tramer, Florian and Carlini, Nicholas and Brendel, Wieland and Madry, Aleksander},
  booktitle = {Neural Information Processing Systems},
  year = {2020},
  url = {https://mlanthology.org/neurips/2020/tramer2020neurips-adaptive/}
}