A Closer Look at Curriculum Adversarial Training: From an Online Perspective

Abstract

Empirical studies of curriculum adversarial training find that gradually increasing the hardness of adversarial examples can further improve the adversarial robustness of the trained model compared with conventional adversarial training. However, the theoretical understanding of this strategy remains limited. To bridge this gap, we analyze the adversarial training process from an online perspective. Specifically, we treat the adversarial examples generated in different iterations as samples from different adversarial distributions. We then introduce a time series prediction framework and derive novel generalization error bounds. Our theoretical results not only demonstrate the effectiveness of the conventional adversarial training algorithm but also explain why curriculum adversarial training methods can further improve adversarial generalization. We conduct comprehensive experiments to support our theory.
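
To make the setting concrete, the following is a minimal PyTorch sketch of curriculum adversarial training, where the hardness of adversarial examples is controlled by an L-infinity PGD perturbation budget that grows linearly over epochs. The model, data loader, schedule, and hyperparameters are illustrative assumptions, not the authors' exact setup or the paper's algorithm.

```python
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps, alpha, steps):
    """Generate L-infinity PGD adversarial examples with budget eps."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascent step, then project back into the eps-ball and valid pixel range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()


def curriculum_adversarial_training(model, loader, epochs,
                                    eps_max=8 / 255, alpha=2 / 255,
                                    steps=10, lr=0.1):
    """Train on adversarial examples whose hardness (eps) grows each epoch."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for epoch in range(epochs):
        # Curriculum schedule (assumed): linearly ramp eps from ~0 to eps_max,
        # so early iterations see easy adversarial examples and later ones see
        # samples from progressively harder adversarial distributions.
        eps = eps_max * (epoch + 1) / epochs
        for x, y in loader:
            x_adv = pgd_attack(model, x, y, eps, alpha, steps)
            opt.zero_grad()
            F.cross_entropy(model(x_adv), y).backward()
            opt.step()
    return model
```

In this view, each epoch's attack budget defines a different adversarial distribution, which is the sequence of distributions the online/time-series analysis in the paper reasons about.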

Cite

Text

Shi and Liu. "A Closer Look at Curriculum Adversarial Training: From an Online Perspective." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I13.29418

Markdown

[Shi and Liu. "A Closer Look at Curriculum Adversarial Training: From an Online Perspective." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/shi2024aaai-closer/) doi:10.1609/AAAI.V38I13.29418

BibTeX

@inproceedings{shi2024aaai-closer,
  title     = {{A Closer Look at Curriculum Adversarial Training: From an Online Perspective}},
  author    = {Shi, Lianghe and Liu, Weiwei},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2024},
  pages     = {14973--14981},
  doi       = {10.1609/AAAI.V38I13.29418},
  url       = {https://mlanthology.org/aaai/2024/shi2024aaai-closer/}
}