FFF: Fixing Flawed Foundations in Contrastive Pre-Training Results in Very Strong Vision-Language Models

Abstract

Despite noise and caption quality having been acknowledged as important factors impacting vision-language contrastive pre-training, in this paper we show that the full potential of improving the training process by addressing such issues is yet to be realized. Specifically, we first study and analyze two issues affecting training: incorrect assignment of negative pairs, and low caption quality and diversity. We then devise effective solutions for addressing both problems, which essentially require training with multiple true positive pairs. Finally, we propose training with a sigmoid loss to meet this requirement. We show very large gains over the current state-of-the-art for both image recognition (+6% on average over 11 datasets) and image retrieval (+19% on Flickr30k and +15% on MSCOCO).
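As a rough illustration of why a sigmoid loss accommodates multiple true positive pairs, the sketch below (not the authors' released code; it assumes PyTorch, L2-normalised embeddings, and a hypothetical binary positive_mask marking which image-caption pairs match) treats each image-text pair as an independent binary classification, so any number of entries per row can be labelled positive:

import torch
import torch.nn.functional as F

def sigmoid_contrastive_loss(image_feats, text_feats, positive_mask,
                             temperature=10.0, bias=-10.0):
    # image_feats: (N, D), text_feats: (M, D), L2-normalised embeddings.
    # positive_mask: (N, M) bool, True where image i and caption j form a true pair.
    # temperature and bias stand in for the learnable scalars used in sigmoid-based training.
    logits = image_feats @ text_feats.t() * temperature + bias   # (N, M) pairwise scores
    targets = 2.0 * positive_mask.float() - 1.0                  # +1 for positives, -1 for negatives
    # Per-pair binary loss: -log sigmoid(target * logit), summed over captions, averaged over images.
    return -F.logsigmoid(targets * logits).sum(dim=1).mean()

Unlike a softmax-based contrastive loss, which normalises each row into a single categorical distribution and thus presumes exactly one positive, this pairwise formulation places no constraint on how many positives a row contains.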

Cite

Text

Bulat et al. "FFF: Fixing Flawed Foundations in Contrastive Pre-Training Results in Very Strong Vision-Language Models." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.01344

Markdown

[Bulat et al. "FFF: Fixing Flawed Foundations in Contrastive Pre-Training Results in Very Strong Vision-Language Models." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/bulat2024cvpr-fff/) doi:10.1109/CVPR52733.2024.01344

BibTeX

@inproceedings{bulat2024cvpr-fff,
  title     = {{FFF: Fixing Flawed Foundations in Contrastive Pre-Training Results in Very Strong Vision-Language Models}},
  author    = {Bulat, Adrian and Ouali, Yassine and Tzimiropoulos, Georgios},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2024},
  pages     = {14172--14182},
  doi       = {10.1109/CVPR52733.2024.01344},
  url       = {https://mlanthology.org/cvpr/2024/bulat2024cvpr-fff/}
}