ReasonVQA: A Multi-Hop Reasoning Benchmark with Structural Knowledge for Visual Question Answering

Abstract

In this paper, we propose ReasonVQA, a new dataset for the Visual Question Answering (VQA) task. Our dataset automatically integrates structured encyclopedic knowledge and is constructed with a low-cost framework capable of generating complex, multi-hop questions. We evaluated state-of-the-art VQA models on ReasonVQA, and the empirical results demonstrate that it poses significant challenges to these models, highlighting its potential for benchmarking and advancing the field of VQA. Moreover, our dataset scales easily with the number of input images; the current version is already more than an order of magnitude larger than the largest existing datasets requiring external knowledge.

Cite

Text

Tran et al. "ReasonVQA: A Multi-Hop Reasoning Benchmark with Structural Knowledge for Visual Question Answering." International Conference on Computer Vision, 2025.

Markdown

[Tran et al. "ReasonVQA: A Multi-Hop Reasoning Benchmark with Structural Knowledge for Visual Question Answering." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/tran2025iccv-reasonvqa/)

BibTeX

@inproceedings{tran2025iccv-reasonvqa,
  title     = {{ReasonVQA: A Multi-Hop Reasoning Benchmark with Structural Knowledge for Visual Question Answering}},
  author    = {Tran, Duong T. and Tran, Trung-Kien and Hauswirth, Manfred and Le Phuoc, Danh},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {18793--18803},
  url       = {https://mlanthology.org/iccv/2025/tran2025iccv-reasonvqa/}
}