On Pretraining Data Diversity for Self-Supervised Learning
Abstract
We explore the impact of training with more diverse datasets, characterized by the number of unique samples, on the performance of self-supervised learning (SSL) under a fixed computational budget. Our findings demonstrate that increasing pretraining data diversity enhances SSL performance, albeit only when the distribution distance to the downstream data is minimal. Notably, even with exceptionally large pretraining data diversity, achieved through web crawling or diffusion-generated data among other means, the distribution shift remains a challenge. Our experiments are comprehensive, covering seven SSL methods and large-scale datasets such as ImageNet and YFCC100M, and amount to over 200 GPU days of compute. The code and trained models will be available at https://github.com/hammoudhasan/DiversitySSL.
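To make the fixed-budget setup concrete, the sketch below holds the total number of images processed constant while varying the number of unique pretraining samples, so lower-diversity runs simply revisit the same images more often. This is only an illustrative PyTorch sketch of the protocol described in the abstract, not the authors' released code; the budget values, the `FakeData` stand-in dataset, and the `ssl_training_step` placeholder are assumptions made for illustration.

```python
import torch
from torch.utils.data import DataLoader, RandomSampler, Subset
from torchvision import datasets, transforms

# Hypothetical numbers for illustration only; the paper's actual budgets differ.
TOTAL_SAMPLES_SEEN = 10_000   # fixed compute budget: total images processed
BATCH_SIZE = 64


def make_fixed_budget_loader(full_dataset, num_unique_samples):
    """Build a loader that always yields TOTAL_SAMPLES_SEEN images,
    drawn from a pool of only `num_unique_samples` unique images."""
    # Restrict the pool of unique images (the "diversity" axis).
    indices = torch.randperm(len(full_dataset))[:num_unique_samples].tolist()
    subset = Subset(full_dataset, indices)

    # Sampling with replacement keeps the total samples seen constant,
    # so lower-diversity runs revisit the same images more often.
    sampler = RandomSampler(subset, replacement=True, num_samples=TOTAL_SAMPLES_SEEN)
    return DataLoader(subset, batch_size=BATCH_SIZE, sampler=sampler, drop_last=True)


def ssl_training_step(images):
    """Placeholder for one SSL update (e.g., a SimCLR- or BYOL-style step)."""
    pass


if __name__ == "__main__":
    # Synthetic stand-in for a large pretraining set such as ImageNet or YFCC100M.
    dataset = datasets.FakeData(size=5_000, image_size=(3, 224, 224),
                                transform=transforms.ToTensor())

    # Two runs with the same compute budget but different pretraining diversity.
    for diversity in (500, 5_000):
        loader = make_fixed_budget_loader(dataset, diversity)
        for images, _ in loader:
            ssl_training_step(images)
```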
Cite
Text
Hammoud et al. "On Pretraining Data Diversity for Self-Supervised Learning." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-72992-8_4
Markdown
[Hammoud et al. "On Pretraining Data Diversity for Self-Supervised Learning." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/hammoud2024eccv-pretraining/) doi:10.1007/978-3-031-72992-8_4
BibTeX
@inproceedings{hammoud2024eccv-pretraining,
  title     = {{On Pretraining Data Diversity for Self-Supervised Learning}},
  author    = {Hammoud, Hasan Abed Al Kader and Das, Tuhin and Pizzati, Fabio and Torr, Philip and Bibi, Adel and Ghanem, Bernard},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2024},
  doi       = {10.1007/978-3-031-72992-8_4},
  url       = {https://mlanthology.org/eccv/2024/hammoud2024eccv-pretraining/}
}