The SSL Interplay: Augmentations, Inductive Bias, and Generalization
Abstract
Self-supervised learning (SSL) has emerged as a powerful framework for learning representations from raw data without supervision. Yet in practice, engineers face issues such as instability in tuning optimizers and collapse of representations during training. Such challenges motivate the need for a theory to shed light on the complex interplay between the choice of data augmentation, network architecture, and training algorithm. We study this interplay with a precise analysis of generalization performance on both pretraining and downstream tasks in kernel regimes, and highlight several insights for SSL practitioners that arise from our theory.
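To make the kernel-regime setting concrete, here is a minimal numpy sketch, not the paper's implementation: a linear head over fixed random Fourier features stands in for a network in the kernel regime, and a non-contrastive SSL solution is approximated by the top eigenvectors of the augmentation-averaged feature covariance. The additive-noise augmentation, feature dimensions, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch of non-contrastive SSL in a kernel-style regime.
# Assumptions (not from the paper): RBF-like random Fourier features,
# additive Gaussian noise as the augmentation, illustrative sizes.
import numpy as np

rng = np.random.default_rng(0)
n, d, p, k, m = 256, 8, 512, 4, 16  # samples, input dim, feature dim, repr. dim, augs/point

X = rng.normal(size=(n, d))  # raw, unlabeled data

# Fixed random Fourier features: training only a linear head on top of
# them is a common finite-dimensional stand-in for a kernel regime.
Wrf = rng.normal(size=(d, p))
brf = rng.uniform(0.0, 2.0 * np.pi, p)
phi = lambda x: np.sqrt(2.0 / p) * np.cos(x @ Wrf + brf)

aug = lambda x: x + 0.1 * rng.normal(size=x.shape)  # assumed augmentation

# Augmentation-averaged features: Phi_bar[i] ~ E_a[phi(a(x_i))].
Phi_bar = np.mean([phi(aug(X)) for _ in range(m)], axis=0)

# In simplified analyses of this kind, the representation learned by a
# non-contrastive objective is spanned by top eigenfunctions of an
# augmentation-averaged operator; its empirical covariance is used here.
C = Phi_bar.T @ Phi_bar / n
_, eigvecs = np.linalg.eigh(C)  # eigenvalues in ascending order
W = eigvecs[:, -k:]             # head mapping z = phi(x) @ W

# Sanity checks: two views of the same point should map close together
# (invariance) while per-dimension spread stays away from zero (no collapse).
z1, z2 = phi(aug(X)) @ W, phi(aug(X)) @ W
print("mean view disagreement:", np.mean(np.sum((z1 - z2) ** 2, axis=1)))
print("per-dimension std:     ", (phi(X) @ W).std(axis=0))
```

Here the eigendecomposition plays the role of the training algorithm; the paper's analysis is what connects such spectral solutions to generalization on pretraining and downstream tasks.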
Cite
Text
Cabannes et al. "The SSL Interplay: Augmentations, Inductive Bias, and Generalization." International Conference on Machine Learning, 2023.
Markdown
[Cabannes et al. "The SSL Interplay: Augmentations, Inductive Bias, and Generalization." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/cabannes2023icml-ssl/)
BibTeX
@inproceedings{cabannes2023icml-ssl,
title = {{The SSL Interplay: Augmentations, Inductive Bias, and Generalization}},
author = {Cabannes, Vivien and Kiani, Bobak and Balestriero, Randall and LeCun, Yann and Bietti, Alberto},
booktitle = {International Conference on Machine Learning},
year = {2023},
pages = {3252--3298},
volume = {202},
url = {https://mlanthology.org/icml/2023/cabannes2023icml-ssl/}
}