Rethinking the Effect of Data Augmentation in Adversarial Contrastive Learning
Abstract
Recent works have shown that self-supervised learning can achieve remarkable robustness when integrated with adversarial training (AT). However, the robustness gap between supervised AT (sup-AT) and self-supervised AT (self-AT) remains significant. Motivated by this observation, we revisit existing self-AT methods and discover an inherent dilemma that affects self-AT robustness: both overly strong and overly weak data augmentations hurt self-AT, and a medium strength is insufficient to bridge the gap. To resolve this dilemma, we propose a simple remedy named DYNACL (Dynamic Adversarial Contrastive Learning). In particular, we propose an augmentation schedule that gradually anneals from a strong augmentation to a weak one, benefiting from both extreme cases. In addition, we adopt a fast post-processing stage to adapt the pretrained model to downstream tasks. Through extensive experiments, we show that DYNACL can improve state-of-the-art self-AT robustness by 8.84% under Auto-Attack on the CIFAR-10 dataset, and can even outperform vanilla supervised adversarial training for the first time. Our code is available at \url{https://github.com/PKU-ML/DYNACL}.
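The annealing idea described in the abstract translates naturally into a per-epoch augmentation pipeline. Below is a minimal sketch of such a schedule, not the authors' exact method: the function name, the linear decay, and the SimCLR-style transform set are illustrative assumptions, and the released code at the repository above defines the actual schedule and hyperparameters.

```python
import torchvision.transforms as T

def annealed_augmentation(epoch, total_epochs, s_max=1.0, s_min=0.0):
    """Hypothetical sketch of an annealed augmentation pipeline.

    Linearly decays an augmentation-strength scalar s from s_max
    (strong, SimCLR-like) to s_min (weak, nearly identity) over
    training, so early epochs see diverse views and late epochs
    see views closer to the clean distribution.
    """
    s = s_max - (s_max - s_min) * epoch / total_epochs
    return T.Compose([
        # Crops shrink less aggressively as s -> 0 (scale lower bound -> 1.0).
        T.RandomResizedCrop(32, scale=(1.0 - 0.9 * s, 1.0)),
        T.RandomHorizontalFlip(),
        # Color distortion strength and application probability both decay with s.
        T.RandomApply([T.ColorJitter(0.4 * s, 0.4 * s, 0.4 * s, 0.1 * s)], p=0.8 * s),
        T.RandomGrayscale(p=0.2 * s),
        T.ToTensor(),
    ])

# Example: rebuild the transform at the start of each epoch.
# transform = annealed_augmentation(epoch, total_epochs=100)
```

Scaling every transform by a single strength scalar is one simple way to interpolate between the strong and weak extremes; a stepwise or cosine decay would fit the same framework.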
Cite
Text
Luo et al. "Rethinking the Effect of Data Augmentation in Adversarial Contrastive Learning." International Conference on Learning Representations, 2023.
Markdown
[Luo et al. "Rethinking the Effect of Data Augmentation in Adversarial Contrastive Learning." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/luo2023iclr-rethinking/)
BibTeX
@inproceedings{luo2023iclr-rethinking,
title = {{Rethinking the Effect of Data Augmentation in Adversarial Contrastive Learning}},
author = {Luo, Rundong and Wang, Yifei and Wang, Yisen},
booktitle = {International Conference on Learning Representations},
year = {2023},
url = {https://mlanthology.org/iclr/2023/luo2023iclr-rethinking/}
}