MindOmni: Unleashing Reasoning Generation in Vision Language Models with RGPO

Abstract

Recent text-to-image systems face limitations in handling multimodal inputs and performing complex reasoning tasks. We introduce MindOmni, a unified multimodal large language model that addresses these challenges by incorporating reasoning generation through reinforcement learning. MindOmni leverages a three-phase training strategy: i) construction of a unified vision-language model with a decoder-only diffusion module, ii) supervised fine-tuning on Chain-of-Thought (CoT) instruction data, and iii) reinforcement learning with our proposed Reasoning Generation Policy Optimization (RGPO) algorithm, which uses multimodal feedback to effectively guide policy updates. Experimental results demonstrate that MindOmni outperforms existing models, achieving strong performance on both understanding and generation benchmarks while exhibiting advanced fine-grained reasoning generation capabilities, especially for mathematical reasoning instructions. All code will be made public.
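The abstract describes RGPO only at a high level: groups of rollouts are scored with multimodal feedback, and the resulting signal guides policy updates. The paper defines the exact objective; as a rough illustration, the sketch below assumes a GRPO-style formulation with group-normalized advantages, a clipped surrogate, and a KL penalty toward a frozen reference policy. All names and hyperparameters (`beta`, `eps`, the group size `G`) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a GRPO-style update consistent with the abstract's
# description of RGPO: multimodal rewards are assigned per rollout, advantages
# are normalized within each group, and the policy is regularized toward a
# frozen reference model. Hyperparameters here are placeholders.
import torch

def rgpo_style_loss(logp_new, logp_old, logp_ref, rewards, beta=0.04, eps=0.2):
    """Clipped group-relative policy loss with a KL penalty.

    logp_new, logp_old, logp_ref: (G,) summed log-probs of each rollout
        under the current, behavior, and frozen reference policies.
    rewards: (G,) scalar multimodal rewards, one per rollout in the group.
    """
    # Group-normalized advantages: compare each rollout to its group mates.
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)

    # PPO-style clipped surrogate on the importance ratio.
    ratio = torch.exp(logp_new - logp_old)
    surrogate = torch.minimum(ratio * adv,
                              torch.clamp(ratio, 1 - eps, 1 + eps) * adv)

    # Unbiased k3 KL estimator toward the reference policy.
    log_ratio_ref = logp_ref - logp_new
    kl = torch.exp(log_ratio_ref) - log_ratio_ref - 1.0

    return -(surrogate - beta * kl).mean()

# Toy usage with a group of 4 rollouts.
G = 4
loss = rgpo_style_loss(torch.randn(G, requires_grad=True),
                       torch.randn(G), torch.randn(G), torch.rand(G))
loss.backward()
```

In a GRPO-style setup, normalizing rewards within each group removes the need for a learned value function, which is why a single reward scalar per rollout suffices in the sketch above.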

Cite

Text

Xiao et al. "MindOmni: Unleashing Reasoning Generation in Vision Language Models with RGPO." Advances in Neural Information Processing Systems, 2025.

Markdown

[Xiao et al. "MindOmni: Unleashing Reasoning Generation in Vision Language Models with RGPO." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/xiao2025neurips-mindomni/)

BibTeX

@inproceedings{xiao2025neurips-mindomni,
  title     = {{MindOmni: Unleashing Reasoning Generation in Vision Language Models with RGPO}},
  author    = {Xiao, Yicheng and Song, Lin and Chen, Yukang and Luo, Yingmin and Chen, Yuxin and Gan, Yukang and Huang, Wei and Li, Xiu and Qi, Xiaojuan and Shan, Ying},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/xiao2025neurips-mindomni/}
}