Safe RLHF-V: Safe Reinforcement Learning from Multi-Modal Human Feedback

Abstract

Multimodal large language models (MLLMs) are essential for building general-purpose AI assistants; however, they pose increasing safety risks. How can we ensure the safety alignment of MLLMs to prevent undesired behaviors? Going further, it is critical to explore how to fine-tune MLLMs so that they preserve their capabilities while meeting safety constraints. Fundamentally, this challenge can be formulated as a min-max optimization problem. However, existing datasets have not yet disentangled single preference signals into explicit safety constraints, hindering systematic investigation in this direction. Moreover, it remains an open question whether such constraints can be effectively incorporated into the optimization process for multi-modal models. In this work, we present Safe RLHF-V, the first multimodal safety alignment framework. The framework consists of: (I) BeaverTails-V, the first open-source dataset featuring dual preference annotations for helpfulness and safety, supplemented with multi-level safety labels (minor, moderate, severe); (II) Beaver-Guard-V, a multi-level guardrail system that proactively defends against unsafe queries and adversarial attacks; applying the guard model over five rounds of filtering and regeneration significantly enhances the precursor model's overall safety by an average of 40.9%; and (III) building on the dual preference annotations, the first exploration of multi-modal safety alignment within a constrained optimization framework. Experimental results demonstrate that Safe RLHF-V effectively improves both model helpfulness and safety. Specifically, it enhances model safety by 34.2% and helpfulness by 34.3%.
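For reference, the constrained formulation mentioned above can be sketched in the style of the original Safe RLHF objective. The notation below (reward model R_psi, cost model C_phi, policy pi_theta, and input x covering both image and text) is assumed for illustration rather than taken from this page; the exact objective used by Safe RLHF-V is given in the paper.

  \max_{\theta}\; \mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi_\theta(\cdot \mid x)} \big[ R_\psi(x, y) \big]
  \quad \text{s.t.} \quad
  \mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi_\theta(\cdot \mid x)} \big[ C_\phi(x, y) \big] \le 0,

which is typically relaxed to the Lagrangian min-max problem

  \min_{\lambda \ge 0}\; \max_{\theta}\;
  \mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi_\theta(\cdot \mid x)}
  \big[ R_\psi(x, y) - \lambda\, C_\phi(x, y) \big],

where R_\psi would be trained on the helpfulness preferences and C_\phi on the safety preferences from the dual-annotated data.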
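The multi-round guard usage described above (filter a response, regenerate when the guard flags it) can likewise be sketched. This is a minimal illustration under assumed interfaces, not the released Beaver-Guard-V code: the generate and guard_classify callables, the severity labels, and the fixed five-round budget are placeholders based on the abstract.

  MAX_ROUNDS = 5  # the abstract reports five rounds of filtering and regeneration

  def guarded_generate(prompt, image, generate, guard_classify):
      """Sketch: regenerate until the guard model accepts, else refuse.

      generate(prompt, image) -> str stands in for the upstream MLLM;
      guard_classify(prompt, image, response) returns a severity label
      such as "safe", "minor", "moderate", or "severe". Both are assumed
      interfaces, not APIs from the paper.
      """
      response = generate(prompt, image)
      for _ in range(MAX_ROUNDS):
          if guard_classify(prompt, image, response) == "safe":
              return response
          # Guard flagged the response: discard it and regenerate.
          response = generate(prompt, image)
      # Every attempt was flagged; fall back to a safe refusal.
      return "Sorry, I can't help with that request."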

Cite

Text

Ji et al. "Safe RLHF-V: Safe Reinforcement Learning from Multi-Modal Human Feedback." Advances in Neural Information Processing Systems, 2025.

Markdown

[Ji et al. "Safe RLHF-V: Safe Reinforcement Learning from Multi-Modal Human Feedback." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/ji2025neurips-safe/)

BibTeX

@inproceedings{ji2025neurips-safe,
  title     = {{Safe RLHF-V: Safe Reinforcement Learning from Multi-Modal Human Feedback}},
  author    = {Ji, Jiaming and Chen, Xinyu and Pan, Rui and Zhu, Han and Li, Jiahao and Hong, Donghai and Chen, Boyuan and Zhou, Jiayi and Wang, Kaile and Dai, Juntao and Chan, Chi-Min and Han, Sirui and Guo, Yike and Yang, Yaodong},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/ji2025neurips-safe/}
}