Simplicial Embeddings in Self-Supervised Learning and Downstream Classification
Abstract
Simplicial Embeddings (SEM) are representations learned through self-supervised learning (SSL), wherein a representation is projected into $L$ simplices of $V$ dimensions each using a \texttt{softmax} operation. This procedure conditions the representation onto a constrained space during pretraining and imparts an inductive bias for group sparsity. For downstream classification, we formally prove that the SEM representation leads to better generalization than an unnormalized representation. Furthermore, we empirically demonstrate that SSL methods trained with SEMs have improved generalization on natural image datasets such as CIFAR-100 and ImageNet. Finally, when used in a downstream classification task, we show that SEM features exhibit emergent semantic coherence where small groups of learned features are distinctly predictive of semantically-relevant classes.
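To make the projection described in the abstract concrete, here is a minimal PyTorch sketch of a simplicial embedding layer: a linear map produces $L \times V$ logits, and a softmax is applied within each of the $L$ groups of $V$ logits so that every group lies on the $(V-1)$-simplex. The module name, the temperature parameter, and the use of a single linear projection are our assumptions for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class SimplicialEmbedding(nn.Module):
    """Hypothetical sketch: project a feature vector into L simplices of V dims each."""

    def __init__(self, dim: int, L: int, V: int, temperature: float = 1.0):
        super().__init__()
        self.L, self.V = L, V
        self.temperature = temperature  # assumed softmax temperature, not from the abstract
        self.proj = nn.Linear(dim, L * V)  # backbone feature -> L*V logits

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        logits = self.proj(z).view(-1, self.L, self.V)
        # Softmax within each group of V logits: each of the L blocks is
        # non-negative and sums to 1, i.e. lies on the (V-1)-simplex.
        probs = torch.softmax(logits / self.temperature, dim=-1)
        return probs.flatten(1)  # shape (batch, L * V)

# Usage sketch with arbitrary sizes:
sem = SimplicialEmbedding(dim=2048, L=128, V=16)
z = torch.randn(8, 2048)
s = sem(z)  # (8, 2048); each consecutive 16-dim block sums to 1
```

Because each block is a softmax output, most of its mass tends to concentrate on a few coordinates, which is one way to read the group-sparsity inductive bias the abstract mentions.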
Cite
Text
Lavoie et al. "Simplicial Embeddings in Self-Supervised Learning and Downstream Classification." International Conference on Learning Representations, 2023.

Markdown

[Lavoie et al. "Simplicial Embeddings in Self-Supervised Learning and Downstream Classification." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/lavoie2023iclr-simplicial/)

BibTeX
@inproceedings{lavoie2023iclr-simplicial,
  title     = {{Simplicial Embeddings in Self-Supervised Learning and Downstream Classification}},
  author    = {Lavoie, Samuel and Tsirigotis, Christos and Schwarzer, Max and Vani, Ankit and Noukhovitch, Michael and Kawaguchi, Kenji and Courville, Aaron},
  booktitle = {International Conference on Learning Representations},
  year      = {2023},
  url       = {https://mlanthology.org/iclr/2023/lavoie2023iclr-simplicial/}
}