VELOCITI: Benchmarking Video-Language Compositional Reasoning with Strict Entailment

Abstract

A fundamental aspect of compositional reasoning in a video is associating people and their actions across time. Recent years have seen great progress in general-purpose vision/video models and a move towards long-video understanding. While exciting, we take a step back and ask: are today's models good at compositional reasoning on short videos? To this end, we introduce VELOCITI, a benchmark to study Video-LLMs by disentangling and assessing the comprehension of agents, actions, and their associations across multiple events. We adopt the Video-Language Entailment setup and propose StrictVLE, which requires correct classification (rather than ranking) of the positive and negative captions. We evaluate several models and observe that even the best, LLaVA-OneVision (44.5%) and Gemini-1.5-Pro (49.3%), fall far short of human accuracy at 93.0%. Results show that action understanding lags behind agent understanding, and that models fare worse on negative captions created using entities appearing in the video than on those obtained from pure text manipulation. We also present challenges with ClassicVLE and multiple-choice (MC) evaluation, strengthening our preference for StrictVLE. Finally, we validate that our benchmark requires visual inputs of multiple frames, making it ideal for studying video-language compositional reasoning.
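The distinction between the two evaluation protocols in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each example yields a model entailment score in [0, 1] for the positive and the negative caption, and the function names and threshold are hypothetical.

```python
def classic_vle(pairs):
    """ClassicVLE-style scoring: credit if the positive caption
    merely outranks the negative one (a ranking criterion)."""
    return sum(pos > neg for pos, neg in pairs) / len(pairs)

def strict_vle(pairs, threshold=0.5):
    """StrictVLE-style scoring: credit only if the positive caption is
    classified as entailed AND the negative as not entailed,
    each judged independently against the threshold."""
    return sum(pos > threshold and neg <= threshold
               for pos, neg in pairs) / len(pairs)

# (positive_score, negative_score) per video-caption pair
pairs = [(0.9, 0.2), (0.8, 0.7), (0.4, 0.3)]
print(classic_vle(pairs))  # 1.0  — every positive outranks its negative
print(strict_vle(pairs))   # 0.33 — only the first pair is strictly correct
```

Note how ranking hides failures: the second pair is "correct" under ClassicVLE even though the negative caption also scores as entailed, which is exactly the leniency StrictVLE removes.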

Cite

Text

Saravanan et al. "VELOCITI: Benchmarking Video-Language Compositional Reasoning with Strict Entailment." Conference on Computer Vision and Pattern Recognition, 2025. doi:10.1109/CVPR52734.2025.01762

Markdown

[Saravanan et al. "VELOCITI: Benchmarking Video-Language Compositional Reasoning with Strict Entailment." Conference on Computer Vision and Pattern Recognition, 2025.](https://mlanthology.org/cvpr/2025/saravanan2025cvpr-velociti/) doi:10.1109/CVPR52734.2025.01762

BibTeX

@inproceedings{saravanan2025cvpr-velociti,
  title     = {{VELOCITI: Benchmarking Video-Language Compositional Reasoning with Strict Entailment}},
  author    = {Saravanan, Darshana and Gupta, Varun and Singh, Darshan and Khan, Zeeshan and Gandhi, Vineet and Tapaswi, Makarand},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2025},
  pages     = {18914--18924},
  doi       = {10.1109/CVPR52734.2025.01762},
  url       = {https://mlanthology.org/cvpr/2025/saravanan2025cvpr-velociti/}
}