Mitigating Hallucination Through Theory-Consistent Symmetric Multimodal Preference Optimization

Abstract

Direct Preference Optimization (DPO) has emerged as an effective approach for mitigating hallucination in Multimodal Large Language Models (MLLMs). Although existing methods have achieved significant progress by utilizing vision-oriented contrastive objectives to enhance MLLMs' attention to visual inputs and hence reduce hallucination, they suffer from a non-rigorous optimization objective and indirect preference supervision. To address these limitations, we propose Symmetric Multimodal Preference Optimization (SymMPO), which conducts symmetric preference learning with direct preference supervision (i.e., response pairs) to enhance visual understanding, while maintaining rigorous theoretical alignment with standard DPO. In addition to conventional ordinal preference learning, SymMPO introduces a preference margin consistency loss to quantitatively regulate the preference gap between symmetric preference pairs. Comprehensive evaluation across five benchmarks demonstrates SymMPO's superior performance, validating its effectiveness in mitigating hallucination in MLLMs.
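To make the abstract's ingredients concrete, the following is a minimal sketch of a DPO-style objective with two symmetric ordinal terms and a squared margin-consistency penalty. This is an illustration of the general idea only, not the paper's exact formulation: the helper names, the squared-difference penalty, and the `lam` weight are all assumptions introduced here for clarity.

```python
import math

def log_sigmoid(x: float) -> float:
    """Numerically stable log(sigmoid(x))."""
    if x >= 0:
        return -math.log1p(math.exp(-x))
    return x - math.log1p(math.exp(x))

def reward_margin(logp_w: float, logp_l: float,
                  ref_logp_w: float, ref_logp_l: float,
                  beta: float = 0.1) -> float:
    """Implicit DPO reward margin: beta times the chosen-vs-rejected
    log-probability gap, measured relative to a frozen reference model."""
    return beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))

def symmpo_style_loss(margin_a: float, margin_b: float,
                      lam: float = 1.0) -> float:
    """Sketch only (not the paper's exact objective): two ordinal DPO
    terms, one per symmetric preference pair (e.g., the same response
    pair scored against two symmetric visual contexts), plus a squared
    penalty tying the two reward margins together, in the spirit of the
    preference margin consistency loss described in the abstract."""
    ordinal = -log_sigmoid(margin_a) - log_sigmoid(margin_b)
    consistency = lam * (margin_a - margin_b) ** 2
    return ordinal + consistency
```

With both margins at zero the loss reduces to the ordinal term alone (2·log 2); widening either margin lowers its ordinal term, while any gap between the two margins is penalized quadratically.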

Cite

Text

Liu et al. "Mitigating Hallucination Through Theory-Consistent Symmetric Multimodal Preference Optimization." Advances in Neural Information Processing Systems, 2025.

Markdown

[Liu et al. "Mitigating Hallucination Through Theory-Consistent Symmetric Multimodal Preference Optimization." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/liu2025neurips-mitigating/)

BibTeX

@inproceedings{liu2025neurips-mitigating,
  title     = {{Mitigating Hallucination Through Theory-Consistent Symmetric Multimodal Preference Optimization}},
  author    = {Liu, Wenqi and Song, Xuemeng and Li, Jiaxi and Wei, Yinwei and Zheng, Na and Yin, Jianhua and Nie, Liqiang},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/liu2025neurips-mitigating/}
}