The Close Relationship Between Contrastive Learning and Meta-Learning
Abstract
Contrastive learning has recently taken off as a paradigm for learning from unlabeled data. In this paper, we discuss the close relationship between contrastive learning and meta-learning under a certain task distribution. We complement this observation by showing that established meta-learning methods, such as Prototypical Networks, achieve comparable performance to SimCLR when paired with this task distribution. This relationship can be leveraged by taking established techniques from meta-learning, such as task-based data augmentation, and showing that they benefit contrastive learning as well. These tricks also benefit state-of-the-art self-supervised learners that do not use negative pairs, such as BYOL, which achieves 94.6% accuracy on CIFAR-10 using a self-supervised ResNet-18 feature extractor trained with our meta-learning tricks. We conclude that existing advances designed for contrastive learning or meta-learning can be exploited to benefit the other, and it is better for contrastive learning researchers to take lessons from the meta-learning literature (and vice-versa) than to reinvent the wheel.
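The connection the abstract describes can be made concrete with a minimal sketch: treat each image in a batch as its own "class", use one augmented view as the class prototype (support) and the other as the query, and apply a Prototypical-Network-style softmax over similarities. With cosine similarity and a temperature, this recovers an InfoNCE-style objective like SimCLR's. This is an illustrative numpy sketch under those assumptions, not the authors' exact implementation; the function name and temperature value are hypothetical.

```python
import numpy as np

def prototypical_contrastive_loss(support, query, temperature=0.5):
    """Prototypical-Network loss on a two-view 'contrastive task'.

    support: (N, D) embeddings of the first augmented view (one
             prototype per image, since each image is its own class).
    query:   (N, D) embeddings of the second augmented view.
    With cosine similarity as the metric, this reduces to an
    InfoNCE-style objective (illustrative sketch only).
    """
    # L2-normalise so the dot product is cosine similarity.
    s = support / np.linalg.norm(support, axis=1, keepdims=True)
    q = query / np.linalg.norm(query, axis=1, keepdims=True)
    logits = q @ s.T / temperature           # (N, N) similarity matrix
    labels = np.arange(len(q))               # query i matches prototype i
    # Numerically stable softmax cross-entropy over the prototypes.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[labels, labels].mean()
```

Viewed this way, the temperature plays the role of the metric scaling in metric-based meta-learning, and the batch of augmented pairs is exactly an N-way, 1-shot task.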
Cite
Text
Ni et al. "The Close Relationship Between Contrastive Learning and Meta-Learning." International Conference on Learning Representations, 2022.
Markdown
[Ni et al. "The Close Relationship Between Contrastive Learning and Meta-Learning." International Conference on Learning Representations, 2022.](https://mlanthology.org/iclr/2022/ni2022iclr-close/)
BibTeX
@inproceedings{ni2022iclr-close,
title = {{The Close Relationship Between Contrastive Learning and Meta-Learning}},
author = {Ni, Renkun and Shu, Manli and Souri, Hossein and Goldblum, Micah and Goldstein, Tom},
booktitle = {International Conference on Learning Representations},
year = {2022},
url = {https://mlanthology.org/iclr/2022/ni2022iclr-close/}
}