Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-Training Paradigm

Abstract

Recently, large-scale Contrastive Language-Image Pre-training (CLIP) has attracted unprecedented attention for its impressive zero-shot recognition ability and excellent transferability to downstream tasks. However, CLIP is quite data-hungry and requires 400M image-text pairs for pre-training, thereby restricting its adoption. This work proposes a novel training paradigm, Data efficient CLIP (DeCLIP), to alleviate this limitation. We demonstrate that by carefully utilizing the widespread supervision among the image-text pairs, our DeCLIP can learn generic visual features more efficiently. Instead of using the single image-text contrastive supervision, we fully exploit data potential through the use of (1) self-supervision within each modality; (2) multi-view supervision across modalities; (3) nearest-neighbor supervision from other similar pairs. Benefiting from intrinsic supervision, our DeCLIP-ResNet50 can achieve 60.4% zero-shot top-1 accuracy on ImageNet, which is 0.8% higher than CLIP-ResNet50 while using 7.1× fewer data. Our DeCLIP-ResNet50 outperforms its counterpart on 8 out of 11 visual datasets when transferred to downstream tasks. Moreover, scaling up the model and compute also works well in our framework.
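
The sketch below illustrates how the three extra supervision signals described in the abstract might be combined with the standard CLIP image-text contrastive loss. It is a minimal, hedged sketch assuming PyTorch and pre-computed, L2-normalized embeddings; each signal is simplified to an InfoNCE term, and all function names, inputs, and loss weights are illustrative assumptions rather than the authors' implementation.

```python
# Hypothetical sketch of DeCLIP-style supervision signals (not the authors' code).
import torch
import torch.nn.functional as F


def info_nce(a, b, temperature=0.07):
    """Symmetric InfoNCE between two batches of L2-normalized embeddings."""
    logits = a @ b.t() / temperature
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


def declip_style_loss(img_v1, img_v2, txt_v1, txt_v2, nn_txt, temperature=0.07):
    """Combine the extra supervision signals with the CLIP contrastive loss.

    img_v1, img_v2: embeddings of two augmented views of each image.
    txt_v1, txt_v2: embeddings of two augmentations of each caption.
    nn_txt:         nearest-neighbor text embeddings retrieved from a feature queue.
    All tensors are (batch, dim) and assumed L2-normalized.
    """
    # (0) Original CLIP image-text contrastive supervision.
    loss_clip = info_nce(img_v1, txt_v1, temperature)

    # (1) Self-supervision within each modality: pull two views of the same
    #     image (or caption) together. Simplified here to InfoNCE terms.
    loss_img_self = info_nce(img_v1, img_v2, temperature)
    loss_txt_self = info_nce(txt_v1, txt_v2, temperature)

    # (2) Multi-view supervision across modalities: contrast the additional
    #     image/text views against each other as extra positive pairs.
    loss_multi_view = (info_nce(img_v1, txt_v2, temperature) +
                       info_nce(img_v2, txt_v1, temperature) +
                       info_nce(img_v2, txt_v2, temperature))

    # (3) Nearest-neighbor supervision: treat a semantically similar caption
    #     retrieved from a queue of past text embeddings as an extra positive.
    loss_nn = info_nce(img_v1, nn_txt, temperature)

    # Equal weights are a placeholder; the paper's weighting may differ.
    return loss_clip + loss_img_self + loss_txt_self + loss_multi_view + loss_nn
```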

Cite

Text

Li et al. "Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-Training Paradigm." International Conference on Learning Representations, 2022.

Markdown

[Li et al. "Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-Training Paradigm." International Conference on Learning Representations, 2022.](https://mlanthology.org/iclr/2022/li2022iclr-supervision/)

BibTeX

@inproceedings{li2022iclr-supervision,
  title     = {{Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-Training Paradigm}},
  author    = {Li, Yangguang and Liang, Feng and Zhao, Lichen and Cui, Yufeng and Ouyang, Wanli and Shao, Jing and Yu, Fengwei and Yan, Junjie},
  booktitle = {International Conference on Learning Representations},
  year      = {2022},
  url       = {https://mlanthology.org/iclr/2022/li2022iclr-supervision/}
}