Adversarial Supervised Contrastive Learning

Abstract

Contrastive learning is widely used to pre-train deep models, followed by fine-tuning on downstream tasks for better performance or faster training. However, models pre-trained with contrastive learning are barely robust against adversarial examples in downstream tasks, since the representations learned by self-supervision may lack both robustness and class-wise discrimination. To tackle these problems, we adapt the contrastive learning scheme to adversarial examples to enhance robustness, and extend the self-supervised contrastive approach to the supervised setting to enable class discrimination. Equipped with these new designs, we propose adversarial supervised contrastive learning (ASCL), a novel framework for robust pre-training. Despite its simplicity, extensive experiments show that ASCL achieves significant margins in adversarial robustness over prior art, whether followed by lightweight standard fine-tuning or by adversarial fine-tuning. Moreover, ASCL also improves robustness to diverse natural corruptions, suggesting wide applicability in practical scenarios. Notably, ASCL demonstrates impressive results in robust transfer learning.
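At its core, the approach combines adversarial example generation with a supervised contrastive (SupCon-style) objective that pulls embeddings of same-class samples together while pushing other classes apart. As an illustrative sketch only (not the authors' implementation), a supervised contrastive loss over a batch of pre-computed embeddings might look like this; the function name `supcon_loss` and the temperature `tau` are assumptions for illustration:

```python
import math

def supcon_loss(embeddings, labels, tau=0.1):
    """Supervised contrastive (SupCon-style) loss sketch.

    Hypothetical: in ASCL the embeddings would come from adversarially
    perturbed inputs; here they are simply taken as given.
    """
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def normalise(v):
        n = math.sqrt(dot(v, v))
        return [x / n for x in v]

    # Work on L2-normalised embeddings so dot products are cosine similarities.
    z = [normalise(v) for v in embeddings]
    n = len(z)
    total, anchors = 0.0, 0

    for i in range(n):
        positives = [p for p in range(n) if p != i and labels[p] == labels[i]]
        if not positives:
            continue  # anchors with no same-class partner contribute nothing
        # Denominator sums over every other sample in the batch.
        denom = sum(math.exp(dot(z[i], z[j]) / tau) for j in range(n) if j != i)
        # Average the negative log-probability of each positive for this anchor.
        loss_i = sum(-math.log(math.exp(dot(z[i], z[p]) / tau) / denom)
                     for p in positives) / len(positives)
        total += loss_i
        anchors += 1

    return total / anchors
```

Pulling same-class views (clean and adversarial alike) together in this way is what would give the pre-trained representation the class-wise discrimination the abstract highlights.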

Cite

Text

Li et al. "Adversarial Supervised Contrastive Learning." Machine Learning, 2023. doi:10.1007/s10994-022-06269-7

Markdown

[Li et al. "Adversarial Supervised Contrastive Learning." Machine Learning, 2023.](https://mlanthology.org/mlj/2023/li2023mlj-adversarial/) doi:10.1007/s10994-022-06269-7

BibTeX

@article{li2023mlj-adversarial,
  title     = {{Adversarial Supervised Contrastive Learning}},
  author    = {Li, Zhuorong and Yu, Daiwei and Wu, Minghui and Jin, Canghong and Yu, Hongchuan},
  journal   = {Machine Learning},
  year      = {2023},
  pages     = {2105--2130},
  doi       = {10.1007/s10994-022-06269-7},
  volume    = {112},
  url       = {https://mlanthology.org/mlj/2023/li2023mlj-adversarial/}
}