VLRMBench: A Comprehensive and Challenging Benchmark for Vision-Language Reward Models
Abstract
Although large vision-language models (LVLMs) have demonstrated strong performance on multimodal tasks, errors can still arise from biases during the reasoning process. Reward models (RMs) have recently become increasingly pivotal to this reasoning: process RMs evaluate each reasoning step, outcome RMs assess the final result, and critique RMs analyze errors across the entire reasoning process and then propose corrections. However, existing benchmarks for vision-language RMs (VLRMs) typically assess only a single aspect of their capabilities (e.g., distinguishing between two answers), limiting comprehensive evaluation and restricting the development of RMs in the vision-language domain. To address this gap, we propose a comprehensive and challenging benchmark, dubbed VLRMBench, encompassing 12,634 questions. VLRMBench is constructed from three distinct types of datasets, covering mathematical reasoning, hallucination understanding, and multi-image understanding. We design 12 tasks across three major categories, evaluating VLRMs on process understanding, outcome judgment, and critique generation. Extensive experiments on 21 open-source models and 5 advanced closed-source models highlight the challenges posed by VLRMBench. For instance, on the binary classification task "Forecasting Future", the advanced GPT-4o achieves only 76.0% accuracy. The code is available at https://github.com/JCruan519/VLRMBench.
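Since judgment-style tasks such as "Forecasting Future" are binary classification problems scored by accuracy, scoring reduces to comparing each model judgment against a ground-truth label. The sketch below illustrates this; it is not the paper's evaluation harness, and the JSONL layout, field names ("model_judgment", "label"), and file name are assumptions for illustration only. The authoritative data format and evaluation code are in the linked repository.

# Minimal scoring sketch for a binary judgment task such as "Forecasting Future".
# NOTE: the JSONL layout and field names ("model_judgment", "label") are
# assumptions for illustration; see https://github.com/JCruan519/VLRMBench
# for the benchmark's actual data format and evaluation code.
import json

def load_records(path: str) -> list[dict]:
    # One JSON object per line (assumed JSONL format).
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]

def accuracy(records: list[dict]) -> float:
    # Fraction of records whose binary judgment matches the ground-truth label.
    if not records:
        return 0.0
    correct = sum(
        r["model_judgment"].strip().lower() == r["label"].strip().lower()
        for r in records
    )
    return correct / len(records)

if __name__ == "__main__":
    records = load_records("forecasting_future_predictions.jsonl")  # hypothetical file
    print(f"Accuracy: {accuracy(records):.1%}")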
Cite

Ruan et al. "VLRMBench: A Comprehensive and Challenging Benchmark for Vision-Language Reward Models." International Conference on Computer Vision, 2025.

BibTeX
@inproceedings{ruan2025iccv-vlrmbench,
title = {{VLRMBench: A Comprehensive and Challenging Benchmark for Vision-Language Reward Models}},
author = {Ruan, Jiacheng and Yuan, Wenzhen and Gao, Xian and Guo, Ye and Zhang, Daoxin and Xu, Zhe and Hu, Yao and Liu, Ting and Fu, Yuzhuo},
booktitle = {International Conference on Computer Vision},
year = {2025},
pages = {3163--3173},
url = {https://mlanthology.org/iccv/2025/ruan2025iccv-vlrmbench/}
}