Vision-Language Models Do Not Understand Negation

Abstract

Many practical vision-language applications require models that understand negation, e.g., when using natural language to retrieve images that contain certain objects but not others. Despite advancements in vision-language models (VLMs) through large-scale training, their ability to comprehend negation remains underexplored. This study addresses the question: how well do current VLMs understand negation? We introduce NegBench, a new benchmark designed to evaluate negation understanding across 18 task variations and 79k examples spanning image, video, and medical datasets. The benchmark consists of two core tasks that probe negation understanding in diverse multimodal settings: Retrieval with Negation and Multiple Choice Questions with Negated Captions. Our evaluation reveals that modern VLMs struggle significantly with negation, often performing at chance level. To address these shortcomings, we explore a data-centric approach wherein we finetune CLIP models on large-scale synthetic datasets containing millions of negated captions. We show that this approach can result in a 10% increase in recall on negated queries and a 28% boost in accuracy on multiple-choice questions with negated captions.
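As a rough illustration of the multiple-choice setup (a minimal sketch, not the paper's released code or benchmark data), the snippet below scores a single image against an affirmative caption and a negated caption with an off-the-shelf CLIP model from Hugging Face. The model checkpoint, image path, and captions are placeholder assumptions; a model that ignores negation will tend to prefer whichever caption mentions the objects visible in the image, even when they are negated.

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Hypothetical example model and inputs, not taken from the paper.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # placeholder image path
captions = [
    "A photo of a street with cars and no pedestrians.",  # negated caption
    "A photo of a street with cars and pedestrians.",     # affirmative caption
]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image gives the image-to-caption similarity; the model's "answer"
# is the caption with the highest score.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(captions, probs[0].tolist())))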

Cite

Text

Alhamoud et al. "Vision-Language Models Do Not Understand Negation." Conference on Computer Vision and Pattern Recognition, 2025. doi:10.1109/CVPR52734.2025.02757

Markdown

[Alhamoud et al. "Vision-Language Models Do Not Understand Negation." Conference on Computer Vision and Pattern Recognition, 2025.](https://mlanthology.org/cvpr/2025/alhamoud2025cvpr-visionlanguage/) doi:10.1109/CVPR52734.2025.02757

BibTeX

@inproceedings{alhamoud2025cvpr-visionlanguage,
  title     = {{Vision-Language Models Do Not Understand Negation}},
  author    = {Alhamoud, Kumail and Alshammari, Shaden and Tian, Yonglong and Li, Guohao and Torr, Philip H.S. and Kim, Yoon and Ghassemi, Marzyeh},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2025},
  pages     = {29612--29622},
  doi       = {10.1109/CVPR52734.2025.02757},
  url       = {https://mlanthology.org/cvpr/2025/alhamoud2025cvpr-visionlanguage/}
}