Adversarial Robustness of Discriminative Self-Supervised Learning in Vision

Abstract

Self-supervised learning (SSL) has advanced significantly in visual representation learning, yet comprehensive evaluations of its adversarial robustness remain limited. In this study, we evaluate the adversarial robustness of seven discriminative self-supervised models and one supervised model across diverse tasks, including ImageNet classification, transfer learning, segmentation, and detection. Our findings suggest that discriminative SSL models generally exhibit greater robustness to adversarial attacks than their supervised counterpart on ImageNet, an advantage that extends to transfer learning under linear evaluation. However, when fine-tuning is applied, the robustness gap between SSL and supervised models narrows considerably, and the advantage similarly diminishes in segmentation and detection tasks. We also investigate how various factors influence adversarial robustness, including architectural choices, training duration, data augmentations, and batch size. Our analysis contributes to the ongoing exploration of adversarial robustness in visual self-supervised representation learning.
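
As a concrete illustration of the evaluation protocol described above, the sketch below measures robust accuracy under a projected gradient descent (PGD) attack for a model evaluated end to end (e.g., a frozen SSL backbone followed by a linear probe). This is a minimal PyTorch example, not the paper's exact setup: the attack budget (eps, alpha, steps), the [0, 1] input range, and the names ssl_backbone, linear_probe, and test_loader are assumptions.

import torch
import torch.nn.functional as F

def pgd_attack(model, images, labels, eps=8/255, alpha=2/255, steps=10):
    # L-infinity PGD with a random start; parameters are illustrative.
    adv = images + torch.empty_like(images).uniform_(-eps, eps)
    adv = adv.clamp(0, 1).detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()        # ascend along the gradient sign
        adv = images + (adv - images).clamp(-eps, eps)  # project back into the eps-ball
        adv = adv.clamp(0, 1)                           # stay in the valid pixel range
    return adv.detach()

def robust_accuracy(model, loader, device="cuda"):
    # Fraction of examples still classified correctly after the attack.
    model.eval().to(device)
    correct, total = 0, 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        adv = pgd_attack(model, images, labels)
        with torch.no_grad():
            preds = model(adv).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total

# Usage (names hypothetical): attack through the frozen backbone plus linear probe.
# acc = robust_accuracy(torch.nn.Sequential(ssl_backbone, linear_probe), test_loader)

Under linear evaluation the backbone parameters stay frozen and only the probe is trained, but gradients still flow through the backbone when crafting the attack; clean accuracy is obtained by simply skipping the pgd_attack call.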

Cite

Text

Çağatan et al. "Adversarial Robustness of Discriminative Self-Supervised Learning in Vision." International Conference on Computer Vision, 2025.

Markdown

[Çağatan et al. "Adversarial Robustness of Discriminative Self-Supervised Learning in Vision." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/cagatan2025iccv-adversarial/)

BibTeX

@inproceedings{cagatan2025iccv-adversarial,
  title     = {{Adversarial Robustness of Discriminative Self-Supervised Learning in Vision}},
  author    = {Çağatan, Ömer Veysel and Tal, Ömer Faruk and Gursoy, M. Emre},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {2313--2324},
  url       = {https://mlanthology.org/iccv/2025/cagatan2025iccv-adversarial/}
}