The Hard Positive Truth About Vision-Language Compositionality
Abstract
Several benchmarks have concluded that our best vision-language models (e.g., CLIP) are lacking in compositionality. Given an image, these benchmarks probe a model's ability to identify its associated caption amongst a set of compositional distractors. In response, a surge of recent proposals shows improvements by finetuning CLIP with distractors as hard negatives. Our investigations reveal that these improvements have, in fact, been overstated, because existing benchmarks do not probe whether finetuned models remain invariant to hard positives. By curating an evaluation dataset with 112,382 hard negatives and hard positives, we uncover that including hard positives decreases CLIP's performance by 12.9%, while humans perform effortlessly at 99%. CLIP finetuned with hard negatives results in an even larger decrease, up to 38.7%. With this finding, we then produce a 1,775,259 image-text training set with both hard negative and hard positive captions. By training with both, we see improvements on existing benchmarks while simultaneously improving performance on hard positives, indicating a more robust improvement in compositionality. Our work suggests the need for future research to rigorously test and improve CLIP's understanding of semantic relationships between related "positive" concepts.
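The probe described above reduces to ranking candidate captions by image-text similarity: a model passes if the true caption outranks its hard-negative distractors, and (per the paper's argument) if valid hard-positive paraphrases are not pushed below negatives. Below is a minimal sketch of that ranking step using cosine similarity over toy, hand-made embeddings; the vectors are purely illustrative stand-ins, not actual CLIP features.

```python
import numpy as np

def rank_captions(image_emb, caption_embs):
    """Return caption indices sorted by cosine similarity to the image,
    along with the similarity scores themselves."""
    img = image_emb / np.linalg.norm(image_emb)
    caps = caption_embs / np.linalg.norm(caption_embs, axis=1, keepdims=True)
    sims = caps @ img  # cosine similarity of each caption to the image
    return np.argsort(-sims), sims

# Toy embeddings standing in for encoder outputs (illustrative only).
image = np.array([1.0, 0.2, 0.0])
captions = np.array([
    [0.9, 0.30, 0.10],  # 0: true caption, e.g. "a dog chasing a cat"
    [0.1, 0.90, 0.20],  # 1: hard negative, e.g. "a cat chasing a dog"
    [0.7, 0.40, 0.30],  # 2: hard positive, a valid paraphrase
])
order, sims = rank_captions(image, captions)
print(order[0])  # index of the best-matching caption
```

A compositional model should rank the true caption and its hard-positive paraphrase above the hard negative; the paper's finding is that hard-negative-only finetuning distorts exactly this ordering by suppressing hard positives along with negatives.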
Cite
Text
Kamath et al. "The Hard Positive Truth About Vision-Language Compositionality." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-72630-9_3
Markdown
[Kamath et al. "The Hard Positive Truth About Vision-Language Compositionality." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/kamath2024eccv-hard/) doi:10.1007/978-3-031-72630-9_3
BibTeX
@inproceedings{kamath2024eccv-hard,
title = {{The Hard Positive Truth About Vision-Language Compositionality}},
author = {Kamath, Amita and Hsieh, Cheng-Yu and Chang, Kai-Wei and Krishna, Ranjay},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2024},
doi = {10.1007/978-3-031-72630-9_3},
url = {https://mlanthology.org/eccv/2024/kamath2024eccv-hard/}
}