Can Adversarial Training Be Manipulated by Non-Robust Features?

Abstract

Adversarial training, originally designed to resist test-time adversarial examples, has been shown to be promising in mitigating training-time availability attacks. This defense ability, however, is challenged in this paper. We identify a novel threat model named stability attack, which aims to hinder robust availability by slightly manipulating the training data. Under this threat, we show that adversarial training using a conventional defense budget $\epsilon$ provably fails to provide test robustness in a simple statistical setting, where the non-robust features of the training data can be reinforced by $\epsilon$-bounded perturbations. Further, we analyze the necessity of enlarging the defense budget to counter stability attacks. Finally, comprehensive experiments demonstrate that stability attacks are harmful on benchmark datasets, and thus adaptive defenses are necessary to maintain robustness.
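To make the threat model concrete, below is a minimal, hedged sketch (not the authors' exact algorithm) of an $\epsilon$-bounded training-data perturbation that *descends* the loss on the true labels, thereby reinforcing non-robust features so that adversarial training with the same budget $\epsilon$ may fail to yield robustness. The function name `stability_perturbation` and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def stability_perturbation(model, x, y, eps, alpha=None, steps=10):
    """Illustrative sketch: craft an eps-bounded (L-infinity) perturbation of
    training inputs that MINIMIZES the loss on the correct labels, i.e. an
    "anti-adversarial" change that strengthens non-robust predictive features.
    This is an assumed PGD-style instantiation, not the paper's exact attack."""
    alpha = alpha if alpha is not None else eps / 4
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            # Descend (not ascend) the loss so the perturbed data becomes
            # "easier" to fit, reinforcing non-robust features.
            delta -= alpha * grad.sign()
            delta.clamp_(-eps, eps)                      # stay inside the eps ball
            delta.copy_((x + delta).clamp(0, 1) - x)     # keep images in [0, 1]
    return delta.detach()
```

Under this hedged sketch, the perturbed training set `x + delta` looks benign, yet adversarial training with defense budget $\epsilon$ on it can fail; the paper's analysis motivates enlarging the defense budget beyond $\epsilon$ as an adaptive countermeasure.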

Cite

Text

Tao et al. "Can Adversarial Training Be Manipulated by Non-Robust Features?" Neural Information Processing Systems, 2022.

Markdown

[Tao et al. "Can Adversarial Training Be Manipulated by Non-Robust Features?" Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/tao2022neurips-adversarial/)

BibTeX

@inproceedings{tao2022neurips-adversarial,
  title     = {{Can Adversarial Training Be Manipulated by Non-Robust Features?}},
  author    = {Tao, Lue and Feng, Lei and Wei, Hongxin and Yi, Jinfeng and Huang, Sheng-Jun and Chen, Songcan},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/tao2022neurips-adversarial/}
}