Curriculum Learning for Data-Efficient Vision-Language Alignment

Abstract

Aligning image and text encoders from scratch with contrastive learning requires large amounts of paired image-text data. We alleviate this need by aligning individually pre-trained language and vision representation models using a much smaller amount of paired data and a curriculum learning algorithm that learns fine-grained vision-language alignments. TOnICS (Training with Ontology-Informed Contrastive Sampling) initially samples minibatches whose image-text pairs contain a wide variety of objects, to learn object-level vision-language alignment, and progressively samples minibatches in which all image-text pairs contain the same object, to learn finer-grained contextual alignment. Aligning pre-trained BERT and VinVL-OD models to each other with TOnICS outperforms CLIP on downstream zero-shot image retrieval while using less than 1% as much training data.
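The curriculum described above can be read as a single sampling knob: how often a minibatch is drawn from image-text pairs that all contain the same object. Below is a minimal Python sketch of that idea, not the authors' released code; the class name `OntologyInformedSampler`, the `object_of` mapping, the batch size, and the linear annealing schedule are illustrative assumptions.

```python
import random
from collections import defaultdict


class OntologyInformedSampler:
    """Sketch of a TOnICS-style curriculum minibatch sampler (illustrative).

    Early in training, minibatches are drawn uniformly, so image-text pairs
    cover many different objects (easy, object-level negatives). As
    `same_object_prob` is annealed toward 1, minibatches are drawn from pairs
    that all contain the same object, forcing the contrastive loss to rely on
    finer-grained context rather than object identity.
    """

    def __init__(self, pair_ids, object_of, batch_size=256):
        self.pair_ids = list(pair_ids)
        self.batch_size = batch_size
        # Bucket training pairs by the object they contain.
        self.by_object = defaultdict(list)
        for pid in self.pair_ids:
            self.by_object[object_of[pid]].append(pid)
        # Keep only objects with enough pairs to fill a whole minibatch.
        self.rich_objects = [
            obj for obj, pairs in self.by_object.items()
            if len(pairs) >= batch_size
        ]

    def sample_batch(self, same_object_prob):
        if self.rich_objects and random.random() < same_object_prob:
            # Hard minibatch: every pair depicts the same object.
            obj = random.choice(self.rich_objects)
            return random.sample(self.by_object[obj], self.batch_size)
        # Easy minibatch: pairs span a wide variety of objects.
        return random.sample(self.pair_ids, self.batch_size)


if __name__ == "__main__":
    # Toy placeholder data: 10 objects, 100 pairs each.
    object_of = {i: f"object_{i % 10}" for i in range(1000)}
    sampler = OntologyInformedSampler(object_of.keys(), object_of, batch_size=50)

    total_steps, warmup_steps = 200, 100
    for step in range(total_steps):
        p = min(1.0, step / warmup_steps)  # anneal from easy to hard batches
        batch = sampler.sample_batch(same_object_prob=p)
```

Because the contrastive loss treats every other pair in the minibatch as a negative, same-object minibatches remove object identity as a shortcut, which is what pushes the encoders toward the finer-grained contextual alignment the abstract describes.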

Cite

Text

Srinivasan et al. "Curriculum Learning for Data-Efficient Vision-Language Alignment." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2023. doi:10.1109/CVPRW59228.2023.00595

Markdown

[Srinivasan et al. "Curriculum Learning for Data-Efficient Vision-Language Alignment." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2023.](https://mlanthology.org/cvprw/2023/srinivasan2023cvprw-curriculum/) doi:10.1109/CVPRW59228.2023.00595

BibTeX

@inproceedings{srinivasan2023cvprw-curriculum,
  title     = {{Curriculum Learning for Data-Efficient Vision-Language Alignment}},
  author    = {Srinivasan, Tejas and Ren, Xiang and Thomason, Jesse},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2023},
  pages     = {5619--5624},
  doi       = {10.1109/CVPRW59228.2023.00595},
  url       = {https://mlanthology.org/cvprw/2023/srinivasan2023cvprw-curriculum/}
}