Adversarial Training with Complementary Labels: On the Benefit of Gradually Informative Attacks

Abstract

Adversarial training (AT) with imperfect supervision is important but has received limited attention. To push AT towards more practical scenarios, we explore a brand new yet challenging setting, i.e., AT with complementary labels (CLs), which specify a class that a data sample does not belong to. However, directly combining AT with existing methods for CLs results in consistent failure, whereas a simple baseline of two-stage training does not. In this paper, we further explore this phenomenon and identify the underlying challenges of AT with CLs as intractable adversarial optimization and low-quality adversarial examples. To address these problems, we propose a new learning strategy using gradually informative attacks, which consists of two critical components: 1) Warm-up Attack (Warm-up) gently raises the adversarial perturbation budgets to ease the adversarial optimization with CLs; 2) Pseudo-Label Attack (PLA) incorporates the progressively informative model predictions into a corrected complementary loss. Extensive experiments demonstrate the effectiveness of our method on a range of benchmark datasets. The code is publicly available at: https://github.com/RoyalSkye/ATCL.
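To illustrate the first component, a warm-up on the perturbation budget can be sketched as a simple schedule that ramps the budget from zero to its full value over the early epochs. The function name, the linear ramp, and the specific budget below are illustrative assumptions, not the paper's exact schedule; see the linked repository for the authors' implementation.

```python
def warmup_epsilon(epoch: int, warmup_epochs: int, eps_max: float) -> float:
    """Linearly ramp the adversarial perturbation budget from 0 to
    `eps_max` over the first `warmup_epochs` epochs, then hold it fixed.
    (Illustrative schedule; the paper's actual ramp may differ.)"""
    if warmup_epochs <= 0:
        return eps_max
    return eps_max * min(1.0, epoch / warmup_epochs)

# Example: a common L-infinity budget of 8/255, warmed up over 10 epochs.
budgets = [warmup_epsilon(e, warmup_epochs=10, eps_max=8 / 255) for e in range(15)]
```

The attack at each epoch (e.g., PGD) would then use `budgets[epoch]` in place of a fixed budget, so early attacks are weak and become gradually more informative.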

Cite

Text

Zhou et al. "Adversarial Training with Complementary Labels: On the Benefit of Gradually Informative Attacks." Neural Information Processing Systems, 2022.

Markdown

[Zhou et al. "Adversarial Training with Complementary Labels: On the Benefit of Gradually Informative Attacks." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/zhou2022neurips-adversarial/)

BibTeX

@inproceedings{zhou2022neurips-adversarial,
  title     = {{Adversarial Training with Complementary Labels: On the Benefit of Gradually Informative Attacks}},
  author    = {Zhou, Jianan and Zhu, Jianing and Zhang, Jingfeng and Liu, Tongliang and Niu, Gang and Han, Bo and Sugiyama, Masashi},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/zhou2022neurips-adversarial/}
}