3D-Properties: Identifying Challenges in DPO and Charting a Path Forward

Abstract

Aligning large language models (LLMs) with human preferences has gained significant attention, with Proximal Policy Optimization (PPO) as a standard yet computationally expensive method and Direct Preference Optimization (DPO) as a more efficient alternative. While DPO offers simplicity, it remains underutilized in state-of-the-art LLMs, suggesting potential limitations. In this work, we revisit DPO, analyzing its theoretical foundations and empirical performance to bridge this gap. We identify three key properties, termed the **3D**-properties, that emerge from DPO's learning process: **D**rastic drop in rejected response likelihood, **D**egradation into response suppression, and **D**ispersion effect on unseen responses. We show that these issues arise from DPO's optimization dynamics, where the interaction between chosen and rejected response gradients leads to instability. Our findings are supported by experiments on both a controlled toy model and real-world LLM tasks, including mathematical problem-solving and instruction following. To address these challenges, we propose simple regularization techniques that improve training stability and performance. Additionally, we examine how preference data distribution impacts DPO's effectiveness, offering insights into how alignment models handle out-of-domain (OOD) data. Our work connects these observations to broader research and provides a theoretical explanation for DPO's limitations. We hope these insights will guide future advancements in reward-model-free preference learning, bringing it closer to reward-model-based approaches.
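
For context on the abstract's claim about interacting gradients, the standard DPO objective and its gradient (as in Rafailov et al., 2023) can be written as below. This is the textbook formulation, reproduced for reference rather than taken from this paper, with $\hat{r}_\theta(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)}$ denoting the implicit reward:

$$
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}})
  = -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}}
    \left[ \log \sigma\!\left( \hat{r}_\theta(x, y_w) - \hat{r}_\theta(x, y_l) \right) \right]
$$

$$
\nabla_\theta \mathcal{L}_{\mathrm{DPO}}
  = -\,\beta\, \mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}}
    \left[ \sigma\!\left( \hat{r}_\theta(x, y_l) - \hat{r}_\theta(x, y_w) \right)
    \left( \nabla_\theta \log \pi_\theta(y_w \mid x)
         - \nabla_\theta \log \pi_\theta(y_l \mid x) \right) \right]
$$

Because a single shared weight scales both the chosen and rejected log-likelihood gradients, pushing down $\pi_\theta(y_l \mid x)$ is coupled to pushing up $\pi_\theta(y_w \mid x)$; the drastic drop and dispersion effects described in the abstract concern how the rejected-response term can dominate this coupled update.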

Cite

Text

Yan et al. "3D-Properties: Identifying Challenges in DPO and Charting a Path Forward." International Conference on Learning Representations, 2025.

Markdown

[Yan et al. "3D-Properties: Identifying Challenges in DPO and Charting a Path Forward." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/yan2025iclr-3dproperties/)

BibTeX

@inproceedings{yan2025iclr-3dproperties,
  title     = {{3D-Properties: Identifying Challenges in DPO and Charting a Path Forward}},
  author    = {Yan, Yuzi and Miao, Yibo and Li, Jialian and Zhang, Yipin and Xie, Jian and Deng, Zhijie and Yan, Dong},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/yan2025iclr-3dproperties/}
}