Can Vision Language Models Learn from Visual Demonstrations of Ambiguous Spatial Reasoning?

Abstract

Large vision-language models (VLMs) have become state-of-the-art for many computer vision tasks, with in-context learning (ICL) as a popular adaptation strategy for new ones. But can VLMs learn novel concepts from visual demonstrations with ambiguous text queries, or are they limited to adapting to the output format of ICL examples? We propose a new benchmark, Spatial Visual Ambiguity Tasks (SVAT), that challenges state-of-the-art VLMs to learn new visuospatial tasks in-context. We find that VLMs fail to do this zero-shot, and sometimes continue to fail after finetuning. However, adding simpler data to training via curriculum learning improves ICL performance. We release our benchmark generation, training, and evaluation code to facilitate future research.

Cite

Text

Zhao et al. "Can Vision Language Models Learn from Visual Demonstrations of Ambiguous Spatial Reasoning?" NeurIPS 2024 Workshops: AFM, 2024.

Markdown

[Zhao et al. "Can Vision Language Models Learn from Visual Demonstrations of Ambiguous Spatial Reasoning?" NeurIPS 2024 Workshops: AFM, 2024.](https://mlanthology.org/neuripsw/2024/zhao2024neuripsw-vision/)

BibTeX

@inproceedings{zhao2024neuripsw-vision,
  title     = {{Can Vision Language Models Learn from Visual Demonstrations of Ambiguous Spatial Reasoning?}},
  author    = {Zhao, Bowen and Dirac, Leo Parker and Varshavskaya, Paulina},
  booktitle = {NeurIPS 2024 Workshops: AFM},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/zhao2024neuripsw-vision/}
}