RoVRM: A Robust Visual Reward Model Optimized via Auxiliary Textual Preference Data

Abstract

Large vision-language models (LVLMs) often fail to align with human preferences, leading to issues such as generating misleading content that is not grounded in the visual context (also known as hallucination). A promising solution to this problem is to apply human-preference alignment techniques, such as best-of-n sampling and reinforcement learning. However, these techniques require a visual reward model (VRM), and training such a model is hindered by the scarcity of visual preference data. In this work, we continue this line of research. We present a Robust Visual Reward Model (RoVRM), which improves human-preference alignment for LVLMs. RoVRM leverages auxiliary textual preference data through three-phase progressive training and optimal transport-based preference data selection to effectively mitigate the scarcity of visual preference data. We evaluate RoVRM on commonly used vision-language tasks with the LLaVA-1.5-7B and -13B models. Experimental results demonstrate that RoVRM consistently outperforms traditional VRMs. Furthermore, our three-phase progressive training and preference data selection approaches yield consistent performance gains even for ranking-based alignment techniques, such as direct preference optimization.
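For context, the sketch below shows how a reward model such as RoVRM can drive best-of-n sampling, one of the alignment techniques named above: the LVLM generates several candidate responses and the reward model selects the highest-scoring one. The function names and interfaces are illustrative assumptions, not the authors' implementation.

```python
def best_of_n(generate_fn, reward_fn, prompt, image, n=8):
    """Sample n candidate responses and return the one the reward model scores highest.

    `generate_fn(prompt, image)` stands in for an LVLM's sampling routine and
    `reward_fn(prompt, image, response)` for a scalar-output visual reward model;
    both are placeholders, not part of the paper's released code.
    """
    candidates = [generate_fn(prompt, image) for _ in range(n)]      # n independent samples
    scored = [(reward_fn(prompt, image, c), c) for c in candidates]  # scalar reward per candidate
    best_score, best_candidate = max(scored, key=lambda pair: pair[0])
    return best_candidate, best_score
```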

Cite

Text

Wang et al. "RoVRM: A Robust Visual Reward Model Optimized via Auxiliary Textual Preference Data." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I24.34721

Markdown

[Wang et al. "RoVRM: A Robust Visual Reward Model Optimized via Auxiliary Textual Preference Data." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/wang2025aaai-rovrm/) doi:10.1609/AAAI.V39I24.34721

BibTeX

@inproceedings{wang2025aaai-rovrm,
  title     = {{RoVRM: A Robust Visual Reward Model Optimized via Auxiliary Textual Preference Data}},
  author    = {Wang, Chenglong and Gan, Yang and Huo, Yifu and Mu, Yongyu and Yang, Murun and He, Qiaozhi and Xiao, Tong and Zhang, Chunliang and Liu, Tongran and Zhu, Jingbo},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {25336--25344},
  doi       = {10.1609/AAAI.V39I24.34721},
  url       = {https://mlanthology.org/aaai/2025/wang2025aaai-rovrm/}
}