MJ-Bench: Is Your Multimodal Reward Model Really a Good Judge?
Abstract
Multimodal reward models (RMs) are critical in RLHF and RLAIF, where they serve as judges in aligning foundation models (FMs) with desired behaviors. Despite their significance, these multimodal judges often undergo inadequate evaluation of their capabilities and biases, which can lead to misalignment and unsafe fine-tuning outcomes. To address this issue, we introduce MJ-Bench, a novel benchmark that incorporates a comprehensive preference dataset to evaluate multimodal judges in providing feedback for image generation models across four key perspectives: alignment, safety, image quality, and bias. Specifically, we evaluate a large variety of multimodal judges, including smaller-sized CLIP-based scoring models, open-source VLMs (e.g., the LLaVA family), and closed-source VLMs (e.g., GPT-4o, Claude 3), on each decomposed subcategory of our preference dataset. Experiments reveal that closed-source VLMs generally provide better feedback, with GPT-4o outperforming the other judges on average. Compared with open-source VLMs, smaller-sized scoring models provide better feedback on text-image alignment and image quality, while VLMs provide more accurate feedback on safety and generation bias thanks to their stronger reasoning capabilities. Further studies on feedback scales reveal that VLM judges generally provide more accurate and stable feedback on natural-language (Likert) scales than on numerical scales. We hope that our benchmark can facilitate future alignment research and provide better guidance on using these multimodal judges.
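To make the evaluation protocol concrete, below is a minimal sketch (not the authors' code) of how a CLIP-based scoring model could be tested as a multimodal judge on a pairwise preference set: the judge "wins" a pair when it scores the human-preferred image higher than the rejected one. The dataset fields (prompt, chosen image, rejected image) and the helper names are assumptions for illustration.

```python
# Hypothetical sketch of pairwise-preference accuracy for a CLIP-based judge.
# Assumes preference pairs of the form (prompt, chosen_image, rejected_image).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_score(prompt: str, image: Image.Image) -> float:
    """Text-image alignment score: CLIP's image-text similarity logit."""
    inputs = processor(text=[prompt], images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        return model(**inputs).logits_per_image.item()

def judge_accuracy(pairs: list) -> float:
    """Fraction of pairs on which the judge prefers the human-chosen image."""
    wins = sum(
        clip_score(prompt, chosen) > clip_score(prompt, rejected)
        for prompt, chosen, rejected in pairs
    )
    return wins / len(pairs)
```

The same pairwise-accuracy metric applies to VLM judges; there, the score would instead come from prompting the model for a numerical or Likert-scale rating, which is where the paper observes the Likert scale to be more stable.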
Cite
Text
Chen et al. "MJ-Bench: Is Your Multimodal Reward Model Really a Good Judge?" ICML 2024 Workshops: FM-Wild, 2024.
Markdown
[Chen et al. "MJ-Bench: Is Your Multimodal Reward Model Really a Good Judge?" ICML 2024 Workshops: FM-Wild, 2024.](https://mlanthology.org/icmlw/2024/chen2024icmlw-mjbench/)
BibTeX
@inproceedings{chen2024icmlw-mjbench,
title = {{MJ-Bench: Is Your Multimodal Reward Model Really a Good Judge?}},
author = {Chen, Zhaorun and Du, Yichao and Wen, Zichen and Zhou, Yiyang and Cui, Chenhang and Weng, Zhenzhen and Tu, Haoqin and Wang, Chaoqi and Tong, Zhengwei and Huang, Leria and Chen, Canyu and Ye, Qinghao and Zhu, Zhihong and Zhang, Yuqing and Zhou, Jiawei and Zhao, Zhuokai and Rafailov, Rafael and Finn, Chelsea and Yao, Huaxiu},
booktitle = {ICML 2024 Workshops: FM-Wild},
year = {2024},
url = {https://mlanthology.org/icmlw/2024/chen2024icmlw-mjbench/}
}