VLM’s Eye Examination: Instruct and Inspect Visual Competency of Vision Language Models
Abstract
Vision language models (VLMs) have shown promising reasoning capabilities across various benchmarks; however, our understanding of their visual perception remains limited. In this work, we propose an eye examination process to investigate how a VLM perceives images, focusing on key aspects of visual recognition ranging from basic color and shape to semantic understanding. We introduce a dataset, LENS, to guide a VLM through the examination and to check whether it is ready to take it. Once the model is ready, we conduct the examination, quantifying and visualizing its sensitivity to color and shape as well as its semantic matching ability. Our findings reveal that VLMs vary in their sensitivity to different colors while consistently showing insensitivity to green. We also find that shape sensitivity and semantic recognition depend on the capacity of the underlying LLM, even when the same frozen visual encoder is used. Our analyses and findings can inform the design of VLMs and the pre-processing of their visual inputs to improve application performance.
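For intuition about what such an examination might involve, the following is a minimal, hypothetical sketch of how one could generate synthetic color-and-shape probes and the kind of question a VLM would be asked about them. The color palette, canvas size, shape set, and filenames are illustrative assumptions and do not reflect the actual LENS dataset or the examination protocol described in the paper.

from PIL import Image, ImageDraw

# Hypothetical probe generator: a single colored shape on a neutral background.
# All names and parameters below are illustrative assumptions, not the LENS dataset.
COLORS = {"red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255)}
SHAPES = ["circle", "square", "triangle"]

def make_probe(color: str, shape: str, size: int = 336) -> Image.Image:
    """Draw one colored shape centered on a gray canvas."""
    img = Image.new("RGB", (size, size), (128, 128, 128))
    draw = ImageDraw.Draw(img)
    lo, hi = size // 4, 3 * size // 4
    rgb = COLORS[color]
    if shape == "circle":
        draw.ellipse([lo, lo, hi, hi], fill=rgb)
    elif shape == "square":
        draw.rectangle([lo, lo, hi, hi], fill=rgb)
    else:  # triangle
        draw.polygon([(size // 2, lo), (lo, hi), (hi, hi)], fill=rgb)
    return img

if __name__ == "__main__":
    for color in COLORS:
        for shape in SHAPES:
            make_probe(color, shape).save(f"probe_{color}_{shape}.png")
            # A VLM would then be asked, e.g., "What color is the shape?" or
            # "What shape is shown?", and its answers aggregated into
            # per-color and per-shape accuracy to estimate sensitivity.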
Cite
Text
Hyeon-Woo et al. "VLM’s Eye Examination: Instruct and Inspect Visual Competency of Vision Language Models." Transactions on Machine Learning Research, 2025.
Markdown
[Hyeon-Woo et al. "VLM’s Eye Examination: Instruct and Inspect Visual Competency of Vision Language Models." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/hyeonwoo2025tmlr-vlms/)
BibTeX
@article{hyeonwoo2025tmlr-vlms,
  title   = {{VLM’s Eye Examination: Instruct and Inspect Visual Competency of Vision Language Models}},
  author  = {Hyeon-Woo, Nam and Ye-Bin, Moon and Choi, Wonseok and Hyun, Lee and Oh, Tae-Hyun},
  journal = {Transactions on Machine Learning Research},
  year    = {2025},
  url     = {https://mlanthology.org/tmlr/2025/hyeonwoo2025tmlr-vlms/}
}