Divide and Contrast: Self-Supervised Learning from Uncurated Data
Abstract
Self-supervised learning holds promise in leveraging large amounts of unlabeled data; however, much of its progress has thus far been limited to highly curated pre-training data such as ImageNet. We explore the effects of contrastive learning from larger, less-curated image datasets such as YFCC, and find there is indeed a large difference in the resulting representation quality. We hypothesize that this curation gap is due to a shift in the distribution of image classes, which is more diverse and heavy-tailed, resulting in less relevant negative samples to learn from. We test this hypothesis with a new approach, Divide and Contrast (DnC), which alternates between contrastive learning and clustering-based hard negative mining. When pretrained on less-curated datasets, DnC greatly improves the performance of self-supervised learning on downstream tasks, while remaining competitive with the current state-of-the-art on curated datasets.
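The abstract describes an alternation between contrastive learning and clustering-based hard negative mining: embeddings are clustered so that negatives drawn from within a cluster are semantically close, and therefore harder. Below is a minimal toy sketch of that alternation, assuming hypothetical stand-ins throughout; `encode`, `contrastive_loss`, and the random data are illustrative placeholders, not the authors' implementation.

```python
"""Toy sketch of the divide-and-contrast alternation described in the
abstract. This is an illustrative reconstruction, not the paper's code;
all names and the toy data below are hypothetical."""
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def encode(x):
    # Stand-in for a learned encoder: an L2-normalized random projection.
    z = x @ rng.standard_normal((x.shape[1], 16))
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def contrastive_loss(z, temperature=0.1):
    # InfoNCE-style objective: each row's perturbed "view" is its positive,
    # and every other row in the subset acts as a negative.
    z2 = z + 0.01 * rng.standard_normal(z.shape)
    z2 /= np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z @ z2.T / temperature
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

x = rng.standard_normal((200, 32))  # stand-in "dataset"
n_clusters = 5
for round_idx in range(3):
    z = encode(x)
    # Divide: cluster embeddings so each subset groups similar images,
    # which makes within-cluster negatives harder and more relevant.
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(z)
    # Contrast: evaluate the contrastive objective within each cluster,
    # so negatives come only from the same (semantically close) subset.
    losses = [contrastive_loss(z[labels == c]) for c in range(n_clusters)]
    print(f"round {round_idx}: mean within-cluster loss = {np.mean(losses):.3f}")
```

In the toy loop above, the clustering step plays the role of hard negative mining: restricting the negative pool to a cluster removes easy, unrelated negatives that dominate in a heavy-tailed, uncurated distribution.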
Cite
Text
Tian et al. "Divide and Contrast: Self-Supervised Learning from Uncurated Data." International Conference on Computer Vision, 2021. doi:10.1109/ICCV48922.2021.00991
Markdown
[Tian et al. "Divide and Contrast: Self-Supervised Learning from Uncurated Data." International Conference on Computer Vision, 2021.](https://mlanthology.org/iccv/2021/tian2021iccv-divide/) doi:10.1109/ICCV48922.2021.00991
BibTeX
@inproceedings{tian2021iccv-divide,
title = {{Divide and Contrast: Self-Supervised Learning from Uncurated Data}},
author = {Tian, Yonglong and Hénaff, Olivier J. and van den Oord, Aäron},
booktitle = {International Conference on Computer Vision},
year = {2021},
pages = {10063--10074},
doi = {10.1109/ICCV48922.2021.00991},
url = {https://mlanthology.org/iccv/2021/tian2021iccv-divide/}
}