What Do We Maximize in Self-Supervised Learning?

Abstract

In this paper, we examine self-supervised learning methods, particularly VICReg, to provide an information-theoretic understanding of their construction. As a first step, we show how information-theoretic quantities can be obtained for a deterministic network, offering a possible alternative to prior work that relies on stochastic models. This enables us to demonstrate how VICReg can be (re)discovered from first principles, along with the assumptions it makes about the data distribution. Furthermore, we empirically verify the validity of these assumptions, confirming our novel understanding of VICReg. Finally, we believe that the derivation and insights we obtain can be generalized to many other SSL methods, opening new avenues for theoretical and practical understanding of SSL and transfer learning.
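Since the abstract centers on VICReg, a minimal sketch of the standard VICReg objective (variance, invariance, and covariance terms, following Bardes et al.'s published formulation) may help readers unfamiliar with the method; the coefficient values shown are the commonly used defaults from that work, not quantities derived in this paper.

```python
import torch
import torch.nn.functional as F

def vicreg_loss(z_a, z_b, sim_w=25.0, var_w=25.0, cov_w=1.0, gamma=1.0, eps=1e-4):
    """Sketch of the VICReg loss for two batches of embeddings z_a, z_b of shape (N, D)."""
    # Invariance: mean-squared error between the two augmented views' embeddings.
    sim_loss = F.mse_loss(z_a, z_b)

    # Variance: hinge loss keeping the std of every embedding dimension above gamma.
    std_a = torch.sqrt(z_a.var(dim=0) + eps)
    std_b = torch.sqrt(z_b.var(dim=0) + eps)
    var_loss = torch.mean(F.relu(gamma - std_a)) + torch.mean(F.relu(gamma - std_b))

    # Covariance: penalize off-diagonal entries of each view's covariance matrix.
    n, d = z_a.shape
    z_a_c = z_a - z_a.mean(dim=0)
    z_b_c = z_b - z_b.mean(dim=0)
    cov_a = (z_a_c.T @ z_a_c) / (n - 1)
    cov_b = (z_b_c.T @ z_b_c) / (n - 1)
    off_diag = lambda m: m - torch.diag(torch.diag(m))
    cov_loss = off_diag(cov_a).pow(2).sum() / d + off_diag(cov_b).pow(2).sum() / d

    return sim_w * sim_loss + var_w * var_loss + cov_w * cov_loss
```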

Cite

Text

Shwartz-Ziv et al. "What Do We Maximize in Self-Supervised Learning?" ICML 2022 Workshops: Pre-Training, 2022.

Markdown

[Shwartz-Ziv et al. "What Do We Maximize in Self-Supervised Learning?" ICML 2022 Workshops: Pre-Training, 2022.](https://mlanthology.org/icmlw/2022/shwartzziv2022icmlw-we/)

BibTeX

@inproceedings{shwartzziv2022icmlw-we,
  title     = {{What Do We Maximize in Self-Supervised Learning?}},
  author    = {Shwartz-Ziv, Ravid and Balestriero, Randall and LeCun, Yann},
  booktitle = {ICML 2022 Workshops: Pre-Training},
  year      = {2022},
  url       = {https://mlanthology.org/icmlw/2022/shwartzziv2022icmlw-we/}
}