Rethinking Robust Contrastive Learning from the Adversarial Perspective

Abstract

To advance the understanding of robust deep learning, we delve into the effects of adversarial training on self-supervised and supervised contrastive learning, alongside supervised learning. Our analysis uncovers significant disparities between adversarial and clean representations in standard-trained networks, across various learning algorithms. Remarkably, adversarial training mitigates these disparities and fosters the convergence of representations toward a universal set, regardless of the learning scheme used. Additionally, we observe that increasing the similarity between adversarial and clean representations, particularly near the end of the network, enhances network robustness. These findings offer valuable insights for designing and training effective and robust deep learning networks.
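The comparison at the heart of the abstract, contrasting a network's representation of a clean input with its representation of an adversarially perturbed one, can be illustrated with a minimal sketch. This is not the paper's code: the "network" here is a toy random linear layer, the perturbation is a generic FGSM-style sign step rather than a tuned attack, and cosine similarity stands in for whatever representation-similarity measure the paper uses.

```python
import numpy as np

# Hedged toy sketch: how far does an adversarial representation
# drift from the clean one? Real experiments would probe the
# intermediate activations of a trained deep network.

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 8))  # toy one-layer "feature extractor"

def features(x):
    """Representation of input x under the toy network."""
    return np.tanh(x @ W)

def cosine_sim(a, b):
    """Cosine similarity between two representation vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

x = rng.standard_normal(16)                          # "clean" input
x_adv = x + 0.5 * np.sign(rng.standard_normal(16))   # stand-in perturbation

sim = cosine_sim(features(x), features(x_adv))
print(f"clean-vs-adversarial representation similarity: {sim:.3f}")
```

In the paper's terms, a standard-trained network would show low similarity (large disparity) between the two representations, while adversarial training would push this value toward 1, especially in layers near the end of the network.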

Cite

Text

Ghofrani et al. "Rethinking Robust Contrastive Learning from the Adversarial Perspective." ICML 2023 Workshops: AdvML-Frontiers, 2023.

Markdown

[Ghofrani et al. "Rethinking Robust Contrastive Learning from the Adversarial Perspective." ICML 2023 Workshops: AdvML-Frontiers, 2023.](https://mlanthology.org/icmlw/2023/ghofrani2023icmlw-rethinking/)

BibTeX

@inproceedings{ghofrani2023icmlw-rethinking,
  title     = {{Rethinking Robust Contrastive Learning from the Adversarial Perspective}},
  author    = {Ghofrani, Fatemeh and Yaghouti, Mehdi and Jamshidi, Pooyan},
  booktitle = {ICML 2023 Workshops: AdvML-Frontiers},
  year      = {2023},
  url       = {https://mlanthology.org/icmlw/2023/ghofrani2023icmlw-rethinking/}
}