Robust Cross-Modal Representation Learning with Progressive Self-Distillation

Abstract

The learning objective of the vision-language approach of CLIP does not effectively account for the noisy many-to-many correspondences found in web-harvested image captioning datasets, which contributes to its compute and data inefficiency. To address this challenge, we introduce a novel training framework based on cross-modal contrastive learning that uses progressive self-distillation and soft image-text alignments to more efficiently learn robust representations from noisy data. Our model distills its own knowledge to dynamically generate soft-alignment targets for a subset of images and captions in every minibatch, which are then used to update its parameters. Extensive evaluation across 14 benchmark datasets shows that our method consistently outperforms its CLIP counterpart in multiple settings, including: (a) zero-shot classification, (b) linear probe transfer, and (c) image-text retrieval, without incurring added computational cost. Analysis using an ImageNet-based robustness test-bed reveals that our method offers better effective robustness to natural distribution shifts compared to both ImageNet-trained models and CLIP itself. Lastly, pretraining with datasets spanning two orders of magnitude in size shows that our improvements over CLIP tend to scale with the number of training examples.
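The core mechanism described above, a contrastive objective whose targets are partially distilled from the model's own predictions, can be illustrated with a minimal sketch. This is not the authors' released implementation; it assumes PyTorch and pre-computed image/text embeddings, and the names (distillation_loss, soft_fraction, tau) are hypothetical.

# Minimal sketch of a CLIP-style contrastive loss with self-distilled
# soft-alignment targets. Assumptions: PyTorch; img_emb and txt_emb are
# (B, D) embedding batches; soft_fraction and tau are illustrative names.
import torch
import torch.nn.functional as F

def distillation_loss(img_emb, txt_emb, soft_fraction=0.5, tau=0.07):
    """InfoNCE-style loss where part of the batch is supervised with
    soft targets distilled from the model's own predictions."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / tau  # (B, B) similarity matrix

    B = logits.size(0)
    # Hard one-hot targets: matched image-text pairs lie on the diagonal.
    targets = torch.eye(B, device=logits.device)

    # Distill soft alignments for the last soft_fraction of the batch;
    # the teacher is the current model itself, with gradients detached.
    k = int(B * soft_fraction)
    if k > 0:
        with torch.no_grad():
            soft = F.softmax(logits, dim=-1)
        targets[B - k:] = soft[B - k:]

    # Symmetric cross-entropy over image-to-text and text-to-image directions.
    loss_i2t = -(targets * F.log_softmax(logits, dim=-1)).sum(-1).mean()
    loss_t2i = -(targets.t() * F.log_softmax(logits.t(), dim=-1)).sum(-1).mean()
    return 0.5 * (loss_i2t + loss_t2i)

The "progressive" aspect of the method refers to the distilled portion of each minibatch growing as training proceeds; in this sketch, that would correspond to the training loop scheduling soft_fraction upward over time.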

Cite

Text

Andonian et al. "Robust Cross-Modal Representation Learning with Progressive Self-Distillation." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.01594

Markdown

[Andonian et al. "Robust Cross-Modal Representation Learning with Progressive Self-Distillation." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/andonian2022cvpr-robust/) doi:10.1109/CVPR52688.2022.01594

BibTeX

@inproceedings{andonian2022cvpr-robust,
  title     = {{Robust Cross-Modal Representation Learning with Progressive Self-Distillation}},
  author    = {Andonian, Alex and Chen, Shixing and Hamid, Raffay},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2022},
  pages     = {16430--16441},
  doi       = {10.1109/CVPR52688.2022.01594},
  url       = {https://mlanthology.org/cvpr/2022/andonian2022cvpr-robust/}
}