Understanding Catastrophic Overfitting in Single-Step Adversarial Training

Abstract

Although fast adversarial training has demonstrated both robustness and efficiency, the problem of "catastrophic overfitting" has been observed. This is a phenomenon in which, during single-step adversarial training, the robust accuracy against projected gradient descent (PGD) suddenly drops to 0% after a few epochs, whereas the robust accuracy against the fast gradient sign method (FGSM) rises to 100%. In this paper, we demonstrate that catastrophic overfitting is closely tied to a characteristic of single-step adversarial training: it trains only on adversarial examples with the maximum perturbation, rather than on all adversarial examples along the adversarial direction, which leads to decision boundary distortion and a highly curved loss surface. Based on this observation, we propose a simple method that not only prevents catastrophic overfitting, but also overturns the belief that multi-step adversarial attacks are difficult to defend against with single-step adversarial training.
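The sketch below is a minimal PyTorch illustration of the contrast the abstract draws, not the authors' released implementation: `train_step_fgsm` trains only on the maximum-perturbation point `x + eps * sign(grad)`, while `train_step_checkpointed` is a hedged rendering of the paper's idea of also examining intermediate points along the same adversarial direction and, per example, keeping the smallest scaled perturbation that is already misclassified. Names such as `n_checkpoints` and the specific checkpoint schedule are illustrative assumptions.

```python
# Minimal sketch (assumed interfaces, not the official code) of single-step
# adversarial training with and without checkpoints along the adversarial direction.
import torch
import torch.nn.functional as F


def fgsm_direction(model, x, y, eps):
    """Signed gradient direction scaled to the full perturbation budget eps."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return eps * grad.sign()


def train_step_fgsm(model, optimizer, x, y, eps):
    """Plain FGSM adversarial training: uses only the maximum perturbation."""
    delta = fgsm_direction(model, x, y, eps)
    x_adv = torch.clamp(x + delta, 0.0, 1.0)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()


def train_step_checkpointed(model, optimizer, x, y, eps, n_checkpoints=3):
    """Hedged sketch of training on checkpoints along the adversarial direction:
    per example, keep the smallest scaled perturbation that is misclassified,
    falling back to the full FGSM perturbation otherwise."""
    delta = fgsm_direction(model, x, y, eps)
    with torch.no_grad():
        # Checkpoint scales 1/k, 2/k, ..., 1 along the adversarial direction.
        scales = torch.linspace(1.0 / n_checkpoints, 1.0, n_checkpoints, device=x.device)
        chosen = torch.clamp(x + delta, 0.0, 1.0)  # default: full-strength example
        found = torch.zeros(x.size(0), dtype=torch.bool, device=x.device)
        for s in scales:
            cand = torch.clamp(x + s * delta, 0.0, 1.0)
            wrong = model(cand).argmax(dim=1) != y
            pick = wrong & ~found
            chosen[pick] = cand[pick]
            found |= wrong
    optimizer.zero_grad()
    loss = F.cross_entropy(model(chosen), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The intent of the checkpointed variant is that the model no longer sees only the boundary-of-budget perturbation, which the paper identifies as the trigger for decision boundary distortion during single-step training.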

Cite

Text

Kim et al. "Understanding Catastrophic Overfitting in Single-Step Adversarial Training." AAAI Conference on Artificial Intelligence, 2021. doi:10.1609/AAAI.V35I9.16989

Markdown

[Kim et al. "Understanding Catastrophic Overfitting in Single-Step Adversarial Training." AAAI Conference on Artificial Intelligence, 2021.](https://mlanthology.org/aaai/2021/kim2021aaai-understanding/) doi:10.1609/AAAI.V35I9.16989

BibTeX

@inproceedings{kim2021aaai-understanding,
  title     = {{Understanding Catastrophic Overfitting in Single-Step Adversarial Training}},
  author    = {Kim, Hoki and Lee, Woojin and Lee, Jaewook},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2021},
  pages     = {8119--8127},
  doi       = {10.1609/AAAI.V35I9.16989},
  url       = {https://mlanthology.org/aaai/2021/kim2021aaai-understanding/}
}