Your Large Vision-Language Model Only Needs a Few Attention Heads for Visual Grounding

Abstract

Visual grounding seeks to localize the image region corresponding to a free-form text description. Recently, the strong multimodal capabilities of Large Vision-Language Models (LVLMs) have driven substantial improvements in visual grounding, though these methods inevitably require fine-tuning and additional model components to explicitly generate bounding boxes or segmentation masks. However, we discover that a few attention heads in frozen LVLMs demonstrate strong visual grounding capabilities. We refer to these heads, which consistently capture object locations related to text semantics, as localization heads. Building on them, we introduce a straightforward and effective training-free visual grounding framework that uses the text-to-image attention maps of the localization heads to identify target objects. Surprisingly, only three out of thousands of attention heads are sufficient to achieve localization performance competitive with existing LVLM-based visual grounding methods that require fine-tuning. Our findings suggest that LVLMs can innately ground objects based on a deep comprehension of the text-image relationship, as they implicitly focus on relevant image regions to generate informative text outputs.
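To make the idea concrete, the sketch below illustrates the general recipe the abstract describes: run a frozen LVLM once, collect its attention tensors, average the text-to-image attention of a few selected heads into a heatmap, and read off a bounding box. This is not the authors' released implementation; the selected (layer, head) pairs, the image/text token positions, and the patch-grid size are illustrative assumptions, and the example runs on random attention tensors purely to show the data flow.

import torch
import torch.nn.functional as F

# Assumed shapes: `attentions` is a list of length NUM_LAYERS, each entry
# (NUM_HEADS, SEQ_LEN, SEQ_LEN), e.g. collected from a frozen LVLM forward
# pass with attention outputs enabled. All constants below are hypothetical.
NUM_LAYERS, NUM_HEADS, SEQ_LEN = 32, 32, 700
GRID = 24                                      # image patch grid (24 x 24 = 576 image tokens)
IMG_SLICE = slice(35, 35 + GRID * GRID)        # assumed positions of image tokens
TXT_SLICE = slice(35 + GRID * GRID, SEQ_LEN)   # assumed positions of the query text tokens

# Stand-ins for the few selected "localization heads" as (layer, head) pairs.
SELECTED_HEADS = [(14, 7), (19, 3), (22, 11)]

def grounding_heatmap(attentions, selected=SELECTED_HEADS):
    """Average text-to-image attention over the selected heads and text tokens."""
    maps = []
    for layer, head in selected:
        a = attentions[layer][head]            # (SEQ_LEN, SEQ_LEN) attention matrix
        t2i = a[TXT_SLICE, IMG_SLICE]          # text queries attending to image keys
        maps.append(t2i.mean(dim=0))           # average over text tokens -> (GRID*GRID,)
    heat = torch.stack(maps).mean(dim=0).reshape(GRID, GRID)
    return (heat - heat.min()) / (heat.max() - heat.min() + 1e-8)

def heatmap_to_box(heat, image_size=(336, 336), thresh=0.5):
    """Upsample the patch-level heatmap and take the bounding box of the hot region."""
    up = F.interpolate(heat[None, None], size=image_size,
                       mode="bilinear", align_corners=False)[0, 0]
    ys, xs = torch.nonzero(up >= thresh * up.max(), as_tuple=True)
    return xs.min().item(), ys.min().item(), xs.max().item(), ys.max().item()

if __name__ == "__main__":
    # Random attention tensors just to exercise the pipeline end to end.
    fake_attn = [torch.rand(NUM_HEADS, SEQ_LEN, SEQ_LEN) for _ in range(NUM_LAYERS)]
    heat = grounding_heatmap(fake_attn)
    print("predicted box (x1, y1, x2, y2):", heatmap_to_box(heat))

In practice the attention tensors would come from the frozen LVLM itself (e.g. by enabling attention outputs during a single forward pass over the image and the referring expression), and the heads used would be the localization heads identified by the paper's selection procedure rather than the fixed indices assumed here.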

Cite

Text

Kang et al. "Your Large Vision-Language Model Only Needs a Few Attention Heads for Visual Grounding." Conference on Computer Vision and Pattern Recognition, 2025. doi:10.1109/CVPR52734.2025.00872

Markdown

[Kang et al. "Your Large Vision-Language Model Only Needs a Few Attention Heads for Visual Grounding." Conference on Computer Vision and Pattern Recognition, 2025.](https://mlanthology.org/cvpr/2025/kang2025cvpr-your/) doi:10.1109/CVPR52734.2025.00872

BibTeX

@inproceedings{kang2025cvpr-your,
  title     = {{Your Large Vision-Language Model Only Needs a Few Attention Heads for Visual Grounding}},
  author    = {Kang, Seil and Kim, Jinyeong and Kim, Junhyeok and Hwang, Seong Jae},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2025},
  pages     = {9339--9350},
  doi       = {10.1109/CVPR52734.2025.00872},
  url       = {https://mlanthology.org/cvpr/2025/kang2025cvpr-your/}
}