V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs

Abstract

When we look around and perform complex tasks, how we see and selectively process what we see is crucial. However, the lack of this visual search mechanism in current multimodal LLMs (MLLMs) hinders their ability to focus on important visual details, especially when handling high-resolution and visually crowded images. To address this, we introduce V*, an LLM-guided visual search mechanism that employs the world knowledge in LLMs for efficient visual querying. When combined with an MLLM, this mechanism enhances collaborative reasoning, contextual understanding, and precise visual grounding. This integration results in a new MLLM meta-architecture named Show, sEArch, and TelL (SEAL). We further create V*Bench, a benchmark specifically designed to evaluate MLLMs in their ability to process high-resolution images and focus on visual details. Our study highlights the necessity of incorporating visual search capabilities into multimodal systems. The code is available at https://github.com/penghao-wu/vstar
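
The abstract describes an answer-then-search loop: the MLLM answers directly when the global view suffices, and otherwise invokes LLM-guided visual search over the high-resolution image before answering again. The sketch below is a minimal, hypothetical illustration of such a loop; the names (WorkingMemory, list_missing_targets, locate, answer) are placeholders and not the released API in the linked repository.

# Hypothetical sketch of a SEAL-style answer-then-search loop.
# All class, function, and parameter names are illustrative assumptions.

from dataclasses import dataclass, field


@dataclass
class WorkingMemory:
    """Holds crops of targets recovered by visual search."""
    crops: list = field(default_factory=list)

    def add(self, name, crop):
        self.crops.append((name, crop))


def answer_with_visual_search(mllm, searcher, image, question):
    """Answer directly if possible; otherwise search for missing targets first."""
    memory = WorkingMemory()

    # 1. Ask the MLLM which question-relevant targets it cannot resolve
    #    from the global view of the image (may be empty).
    missing_targets = mllm.list_missing_targets(image, question)

    # 2. For each missing target, run guided search over image sub-regions,
    #    letting the LLM's world knowledge prioritize likely locations.
    for target in missing_targets:
        crop = searcher.locate(image, target)  # e.g. coarse-to-fine cropping
        if crop is not None:
            memory.add(target, crop)

    # 3. Answer again, conditioning on the full image plus the located crops.
    return mllm.answer(image, question, extra_views=memory.crops)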

Cite

Text

Wu and Xie. "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.01243

Markdown

[Wu and Xie. "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/wu2024cvpr-guided/) doi:10.1109/CVPR52733.2024.01243

BibTeX

@inproceedings{wu2024cvpr-guided,
  title     = {{V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs}},
  author    = {Wu, Penghao and Xie, Saining},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2024},
  pages     = {13084-13094},
  doi       = {10.1109/CVPR52733.2024.01243},
  url       = {https://mlanthology.org/cvpr/2024/wu2024cvpr-guided/}
}