AVAM: A Universal Training-Free Adaptive Visual Anchoring Embedded into Multimodal Large Language Model for Multi-Image Question Answering

Abstract

The advancement of Multimodal Large Language Models (MLLMs) has driven significant progress in Visual Question Answering (VQA), which has evolved from single-image VQA to multi-image VQA (MVQA). However, the increased number of images in MVQA inevitably introduces substantial visual redundancy that is irrelevant to question answering, degrading both accuracy and efficiency. Existing methods that compress visual tokens to address this issue lack flexibility in controlling the number of compressed visual tokens and tend to produce discrete visual fragments, which hinder MLLMs' ability to comprehend images holistically. In this paper, we propose a straightforward yet universal Adaptive Visual Anchoring strategy that can be seamlessly integrated into existing MLLMs, offering significant accuracy improvements through adaptive compression. Furthermore, to balance the results derived from the global and compressed visual inputs, we introduce a novel collaborative decoding mechanism, enabling optimal performance. Extensive experiments validate the effectiveness of our method, demonstrating consistent performance improvements across various MLLMs. The code will be made publicly available.
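
To make the collaborative decoding idea concrete, below is a minimal sketch of what one fused decoding step might look like, assuming an MLLM that exposes next-token logits for both the global visual input and the compressed (anchored) visual input. The fusion rule, the `alpha` weight, and all function and variable names here are illustrative assumptions, not the authors' exact formulation.

```python
# Hypothetical sketch of a collaborative decoding step: fuse the next-token
# distributions obtained from the global and compressed visual inputs.
# `alpha` is an assumed balancing hyperparameter, not from the paper.
import torch
import torch.nn.functional as F

def collaborative_decode_step(
    logits_global: torch.Tensor,      # next-token logits from the full (global) visual input, shape (vocab,)
    logits_compressed: torch.Tensor,  # next-token logits from the compressed visual input, shape (vocab,)
    alpha: float = 0.5,               # weight on the compressed view (illustrative)
) -> int:
    """Fuse the two next-token distributions and return the selected token id."""
    probs_global = F.softmax(logits_global, dim=-1)
    probs_compressed = F.softmax(logits_compressed, dim=-1)
    fused = alpha * probs_compressed + (1.0 - alpha) * probs_global
    return int(torch.argmax(fused).item())

# Toy usage over a small vocabulary.
if __name__ == "__main__":
    vocab_size = 8
    g = torch.randn(vocab_size)
    c = torch.randn(vocab_size)
    print(collaborative_decode_step(g, c, alpha=0.6))
```

In practice the two sets of logits would come from two forward passes of the same MLLM, one conditioned on all visual tokens and one on the adaptively anchored subset; greedy argmax is used here only to keep the sketch short.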

Cite

Text

Zeng et al. "AVAM: A Universal Training-Free Adaptive Visual Anchoring Embedded into Multimodal Large Language Model for Multi-Image Question Answering." International Conference on Computer Vision, 2025.

Markdown

[Zeng et al. "AVAM: A Universal Training-Free Adaptive Visual Anchoring Embedded into Multimodal Large Language Model for Multi-Image Question Answering." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/zeng2025iccv-avam/)

BibTeX

@inproceedings{zeng2025iccv-avam,
  title     = {{AVAM: A Universal Training-Free Adaptive Visual Anchoring Embedded into Multimodal Large Language Model for Multi-Image Question Answering}},
  author    = {Zeng, Kang and Zhong, Guojin and Cheng, Jintao and Yuan, Jin and Li, Zhiyong},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {2292--2302},
  url       = {https://mlanthology.org/iccv/2025/zeng2025iccv-avam/}
}