Guillotine Regularization: Why Removing Layers Is Needed to Improve Generalization in Self-Supervised Learning
Abstract
One unexpected technique that emerged in recent years consists of training a Deep Network (DN) with a Self-Supervised Learning (SSL) method, and then using this network on downstream tasks with its last few layers entirely removed. This often skimmed-over trick of throwing away the entire projector is actually critical for SSL methods to display competitive performance. For example, on ImageNet classification, more than 30 percentage points can be gained that way. This is a little vexing, as one would hope that the network layer at which invariance is explicitly enforced by the SSL criterion during training (the last layer) would be the one to use for best generalization performance downstream. But it seems not to be, and this study sheds some light on why. This trick, which we name Guillotine Regularization (GR), is in fact a generically applicable method that has been used to improve generalization performance in transfer learning scenarios. In this work, we identify the underlying reasons behind its success and challenge the preconceived idea that we should throw away the entire projector in SSL. In fact, the optimal layer to use might change significantly depending on the training setup, the data, or the downstream task. Lastly, we give some insights on how to reduce the need for a projector in SSL by aligning the pretext SSL task and the downstream task.
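The mechanism described above can be illustrated with a minimal sketch. The code below is not the paper's implementation: it uses a hypothetical randomly initialized ReLU MLP standing in for a trained backbone-plus-projector, with made-up layer dimensions, purely to show what "guillotining" a variable number of head layers means when extracting embeddings for a downstream task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained network: backbone features feed a
# 3-layer projector head. In SSL, the invariance criterion is applied at
# the projector output; Guillotine Regularization instead extracts
# embeddings from an earlier layer for downstream evaluation.
dims = [512, 256, 256, 128]  # backbone dim -> projector layer dims (assumed)
weights = [
    rng.standard_normal((d_in, d_out)) / np.sqrt(d_in)
    for d_in, d_out in zip(dims[:-1], dims[1:])
]

def embed(x, layers_removed):
    """Forward pass with `layers_removed` head layers guillotined off."""
    h = x
    for w in weights[: len(weights) - layers_removed]:
        h = np.maximum(h @ w, 0.0)  # ReLU MLP projector (assumed architecture)
    return h

x = rng.standard_normal((4, 512))     # a batch of backbone-input features
print(embed(x, 0).shape)              # full projector output: (4, 128)
print(embed(x, 3).shape)              # all head layers removed: (4, 512)
```

In practice one would train a linear probe on `embed(x, k)` for each depth `k` and pick the best, since, as the abstract notes, the optimal truncation point depends on the training setup, the data, and the downstream task.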
Cite
Text
Bordes et al. "Guillotine Regularization: Why Removing Layers Is Needed to Improve Generalization in Self-Supervised Learning." Transactions on Machine Learning Research, 2023.

Markdown
[Bordes et al. "Guillotine Regularization: Why Removing Layers Is Needed to Improve Generalization in Self-Supervised Learning." Transactions on Machine Learning Research, 2023.](https://mlanthology.org/tmlr/2023/bordes2023tmlr-guillotine/)

BibTeX
@article{bordes2023tmlr-guillotine,
title = {{Guillotine Regularization: Why Removing Layers Is Needed to Improve Generalization in Self-Supervised Learning}},
author = {Bordes, Florian and Balestriero, Randall and Garrido, Quentin and Bardes, Adrien and Vincent, Pascal},
journal = {Transactions on Machine Learning Research},
year = {2023},
url = {https://mlanthology.org/tmlr/2023/bordes2023tmlr-guillotine/}
}