RePIC: Reinforced Post-Training for Personalizing Multi-Modal Language Models
Abstract
Recent multi-modal large language models (MLLMs) often struggle to generate personalized image captions, even when trained on high-quality captions. In this work, we observe that such limitations persist in existing post-training-based MLLM personalization methods. Specifically, despite being post-tuned with large-scale caption data through supervised fine-tuning (SFT), these models frequently fail to produce faithful descriptions in real-world scenarios, such as multi-concept image captioning. However, acquiring large-scale, high-quality captions for such complex settings is both costly and difficult. To address the data-centric nature of SFT, we propose a reinforcement learning (RL)-based post-training framework. To the best of our knowledge, this is the first RL-based approach to post-train MLLMs for personalized image captioning. Our method significantly enhances both visual recognition and personalized generation capabilities of MLLMs, and consistently outperforms existing SFT-based baselines, especially in the challenging multi-concept image captioning task. Project page: https://github.com/oyt9306/RePIC
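The abstract names an RL-based post-training framework but does not specify the objective here. As a rough, purely illustrative sketch, the snippet below shows what a verifiable personalization reward with GRPO-style group-relative advantages could look like; the reward design, the concept tags, and all function names are hypothetical assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: verifiable caption reward + GRPO-style group-relative
# advantages. Assumes a GRPO-like objective, which the abstract does not state.
import torch

def verifiable_reward(caption: str, concept_names: list[str]) -> float:
    """Toy reward: 1.0 if every personalized concept name appears in the caption."""
    return float(all(name.lower() in caption.lower() for name in concept_names))

def grpo_advantages(rewards: torch.Tensor) -> torch.Tensor:
    """Normalize each sampled caption's reward against the mean/std of its
    rollout group, so captions better than the group average get positive
    advantage for the policy-gradient update."""
    return (rewards - rewards.mean()) / (rewards.std() + 1e-8)

# Example: a group of 4 sampled captions for one image with two (hypothetical)
# personalized concepts, <bo> and <mia>.
captions = [
    "A photo of <bo> playing with <mia> in the park.",
    "Two dogs playing in a park.",
    "<bo> runs across the grass.",
    "<bo> and <mia> chase a ball together.",
]
rewards = torch.tensor([verifiable_reward(c, ["<bo>", "<mia>"]) for c in captions])
print(grpo_advantages(rewards))  # captions naming both concepts get positive advantage
```

In a setup like this, the reward is checkable from the output alone, so no large corpus of gold personalized captions is needed, which is the data-centric limitation of SFT that the abstract highlights.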
Cite
Text
Oh et al. "RePIC: Reinforced Post-Training for Personalizing Multi-Modal Language Models." Advances in Neural Information Processing Systems, 2025.
Markdown
[Oh et al. "RePIC: Reinforced Post-Training for Personalizing Multi-Modal Language Models." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/oh2025neurips-repic/)
BibTeX
@inproceedings{oh2025neurips-repic,
  title = {{RePIC: Reinforced Post-Training for Personalizing Multi-Modal Language Models}},
  author = {Oh, Yeongtak and Chung, Dohyun and Shin, Juhyeon and Park, Sangha and Barthelemy, Johan and Mok, Jisoo and Yoon, Sungroh},
  booktitle = {Advances in Neural Information Processing Systems},
  year = {2025},
  url = {https://mlanthology.org/neurips/2025/oh2025neurips-repic/}
}