Make Some Noise: Reliable and Efficient Single-Step Adversarial Training
Abstract
Recently, Wong et al. (2020) showed that adversarial training with single-step FGSM leads to a characteristic failure mode named catastrophic overfitting (CO), in which a model becomes suddenly vulnerable to multi-step attacks. They showed experimentally that simply adding a random perturbation prior to FGSM (RS-FGSM) could prevent CO. However, Andriushchenko & Flammarion (2020) observed that RS-FGSM still leads to CO for larger perturbations, and proposed a computationally expensive regularizer (GradAlign) to avoid it. In this work, we methodically revisit the role of noise and clipping in single-step adversarial training. Contrary to previous intuitions, we find that using stronger noise around the clean sample combined with *not clipping* is highly effective in avoiding CO for large perturbation radii. We then propose Noise-FGSM (N-FGSM), which provides the benefits of single-step adversarial training without suffering from CO. Empirical analyses on a large suite of experiments show that N-FGSM matches or surpasses the performance of the previous state-of-the-art method GradAlign while achieving a 3× speed-up.
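As a concrete illustration of the method described in the abstract, here is a minimal PyTorch-style sketch of what a single N-FGSM step could look like: sample strong uniform noise around the clean input, take one FGSM step from the noisy point, and skip the projection of the combined perturbation back onto the ε-ball. The function name and the default step size and noise magnitude are illustrative assumptions, not the paper's prescribed settings.

```python
import torch
import torch.nn.functional as F

def n_fgsm_step(model, x, y, epsilon=8/255, alpha=None, k=None):
    """Sketch of a single N-FGSM step (assumed defaults, not the paper's settings)."""
    alpha = epsilon if alpha is None else alpha  # FGSM step size (assumption)
    k = 2 * epsilon if k is None else k          # noise magnitude larger than epsilon (assumption)

    # Strong uniform noise around the clean sample.
    eta = torch.empty_like(x).uniform_(-k, k)
    x_noisy = (x + eta).requires_grad_(True)

    # Single FGSM step taken from the noisy point.
    loss = F.cross_entropy(model(x_noisy), y)
    grad = torch.autograd.grad(loss, x_noisy)[0]
    delta = alpha * grad.sign()

    # Key point from the abstract: the combined perturbation (eta + delta)
    # is NOT clipped back onto the epsilon-ball around x.
    x_adv = (x + eta + delta).detach()
    return x_adv
```

Training would then minimize the usual cross-entropy loss on `x_adv`; per the abstract, the departure from RS-FGSM is the stronger noise and the absence of the ε-ball projection.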
Cite
Text: de Jorge Aranda et al. "Make Some Noise: Reliable and Efficient Single-Step Adversarial Training." Neural Information Processing Systems, 2022.
Markdown: [de Jorge Aranda et al. "Make Some Noise: Reliable and Efficient Single-Step Adversarial Training." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/dejorgearanda2022neurips-make/)
BibTeX:
@inproceedings{dejorgearanda2022neurips-make,
title = {{Make Some Noise: Reliable and Efficient Single-Step Adversarial Training}},
author = {de Jorge Aranda, Pau and Bibi, Adel and Volpi, Riccardo and Sanyal, Amartya and Torr, Philip and Rogez, Gregory and Dokania, Puneet},
booktitle = {Neural Information Processing Systems},
year = {2022},
url = {https://mlanthology.org/neurips/2022/dejorgearanda2022neurips-make/}
}