A2: Efficient Automated Attacker for Boosting Adversarial Training

Abstract

Adversarial training (AT) significantly improves model robustness, and various variants have been proposed to further boost its performance. Well-recognized methods focus on different components of AT (e.g., designing loss functions and leveraging additional unlabeled data). It is generally accepted that stronger perturbations yield more robust models. However, how to generate stronger perturbations efficiently remains an open problem. In this paper, we propose an efficient automated attacker called A2 to boost AT by generating the optimal perturbations on-the-fly during training. A2 is a parameterized automated attacker that searches the attacker space for the best attacker against the defense model and examples. Extensive experiments across different datasets demonstrate that A2 generates stronger perturbations with low extra cost and reliably improves the robustness of various AT methods against different attacks.
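For context, the sketch below shows the standard setup that A2 builds on: adversarial training with a hand-crafted PGD inner attacker, which A2 replaces with a learned, parameterized attacker. This is a minimal illustration only, not the A2 method; the function names and hyperparameters are assumptions for the example.

```python
# Minimal sketch of PGD-based adversarial training (the baseline inner attacker
# that A2 aims to improve on). Names and settings here are illustrative.
import torch
import torch.nn.functional as F

def pgd_perturb(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Generate L-inf PGD perturbations for a batch (x, y)."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()               # ascend the loss
            delta.clamp_(-eps, eps)                    # project back into the eps-ball
            delta.copy_((x + delta).clamp(0, 1) - x)   # keep x + delta a valid image
    return delta.detach()

def adversarial_training_step(model, optimizer, x, y):
    """One AT step: attack the current model, then train on the perturbed examples."""
    model.eval()
    delta = pgd_perturb(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x + delta), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this standard scheme the attacker (step size, number of steps, update rule) is fixed by hand; the paper's contribution is to search the attacker space automatically so that stronger perturbations are produced per model and per example at low extra cost.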

Cite

Text

Xu et al. "A2: Efficient Automated Attacker for Boosting Adversarial Training." Neural Information Processing Systems, 2022.

Markdown

[Xu et al. "A2: Efficient Automated Attacker for Boosting Adversarial Training." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/xu2022neurips-a2/)

BibTeX

@inproceedings{xu2022neurips-a2,
  title     = {{A2: Efficient Automated Attacker for Boosting Adversarial Training}},
  author    = {Xu, Zhuoer and Zhu, Guanghui and Meng, Changhua and Cui, Shiwen and Ying, Zhenzhe and Wang, Weiqiang and Gu, Ming and Huang, Yihua},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/xu2022neurips-a2/}
}