Localizing Before Answering: A Benchmark for Grounded Medical Visual Question Answering

Abstract

Medical Large Multi-modal Models (LMMs) have demonstrated remarkable capabilities in medical data interpretation. However, these models frequently generate hallucinations that contradict the source evidence, particularly due to inadequate localization reasoning. This work reveals a critical limitation in current medical LMMs: instead of analyzing relevant pathological regions, they often rely on linguistic patterns or attend to irrelevant image areas when responding to disease-related queries. To address this, we introduce HEAL-MedVQA (Hallucination Evaluation via Localization MedVQA), a comprehensive benchmark designed to evaluate LMMs' localization abilities and hallucination robustness. HEAL-MedVQA features (i) two innovative evaluation protocols to assess visual and textual shortcut learning, and (ii) a dataset of 67K VQA pairs with doctor-annotated anatomical segmentation masks for pathological regions. To improve visual reasoning, we propose the Localize-before-Answer (LobA) framework, which trains LMMs to localize target regions of interest and to self-prompt with the segmented pathological areas, generating grounded and reliable answers. Experimental results demonstrate that our approach significantly outperforms state-of-the-art biomedical LMMs on the challenging HEAL-MedVQA benchmark, advancing robustness in medical VQA.

Cite

Text

Nguyen et al. "Localizing Before Answering: A Benchmark for Grounded Medical Visual Question Answering." International Joint Conference on Artificial Intelligence, 2025. doi:10.24963/IJCAI.2025/853

Markdown

[Nguyen et al. "Localizing Before Answering: A Benchmark for Grounded Medical Visual Question Answering." International Joint Conference on Artificial Intelligence, 2025.](https://mlanthology.org/ijcai/2025/nguyen2025ijcai-localizing/) doi:10.24963/IJCAI.2025/853

BibTeX

@inproceedings{nguyen2025ijcai-localizing,
  title     = {{Localizing Before Answering: A Benchmark for Grounded Medical Visual Question Answering}},
  author    = {Nguyen, Dung and Ho, Minh Khoi and Ta, Huy D. and Nguyen, Thanh Tam and Chen, Qi and Rav, Kumar and Dang, Quy Duong and Ramchandre, Satwik and Phung, Son Lam and Liao, Zhibin and To, Minh-Son and Verjans, Johan and Le Nguyen, Phi and Phan, Vu Minh Hieu},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {7670--7678},
  doi       = {10.24963/IJCAI.2025/853},
  url       = {https://mlanthology.org/ijcai/2025/nguyen2025ijcai-localizing/}
}