Step-Controlled DPO: Leveraging Stepwise Errors for Enhancing Mathematical Reasoning of Language Models

Abstract

Direct Preference Optimization (DPO) has proven effective at improving the performance of large language models (LLMs) on downstream tasks such as reasoning and alignment. In this work, we propose Step-Controlled DPO (SCDPO), a method for automatically providing stepwise error supervision by creating negative samples of mathematical reasoning rationales that start making errors at a specified step. By applying these samples in DPO training, SCDPO can better align the model to avoid reasoning errors and output accurate reasoning steps. Qualitative analysis of the credit assignment of SCDPO and DPO demonstrates the effectiveness of SCDPO at identifying errors in mathematical solutions. We then apply SCDPO to an InternLM2-20B model, resulting in a 20B model that achieves competitive scores of 88.5% on GSM8K and 58.1% on MATH, rivaling all other open-source LLMs, showing the great potential of our method. The code, models and data are released to inspire future work.
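To make the high-level description above concrete, the sketch below illustrates the two ingredients the abstract mentions: (i) generating a negative rationale whose errors begin at a controlled step by re-sampling from a prefix of a correct solution, and (ii) the standard DPO objective into which such chosen/rejected pairs would be fed. This is a minimal sketch under assumptions, not the paper's released implementation: the helper callables (sample_continuation, extract_answer), the temperature value, and the retry budget are hypothetical placeholders.

import torch.nn.functional as F

def make_step_controlled_negative(question, correct_steps, sample_continuation,
                                  extract_answer, gold_answer, error_step,
                                  temperature=1.1, max_tries=8):
    """Hypothetical sketch: keep the first `error_step` steps of a correct
    rationale as a prefix, then re-sample the remainder at an elevated
    temperature until the completed solution reaches a wrong final answer,
    yielding a rejected sample whose first error occurs at `error_step`."""
    prefix = "\n".join(correct_steps[:error_step])
    for _ in range(max_tries):
        continuation = sample_continuation(question, prefix, temperature)
        candidate = prefix + "\n" + continuation
        if extract_answer(candidate) != gold_answer:
            return candidate  # step-controlled negative sample
    return None  # could not elicit an error starting at this step

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO objective; in SCDPO the rejected rationales are the
    step-controlled negatives generated above."""
    pi_logratios = policy_chosen_logps - policy_rejected_logps
    ref_logratios = ref_chosen_logps - ref_rejected_logps
    return -F.logsigmoid(beta * (pi_logratios - ref_logratios)).mean()

In training, each chosen/rejected pair (the correct rationale and its step-controlled negative) would be scored for sequence log-probability under both the policy and a frozen reference model, and those four log-probabilities passed to dpo_loss.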

Cite

Text

Lu et al. "Step-Controlled DPO: Leveraging Stepwise Errors for Enhancing Mathematical Reasoning of Language Models." Transactions on Machine Learning Research, 2025.

Markdown

[Lu et al. "Step-Controlled DPO: Leveraging Stepwise Errors for Enhancing Mathematical Reasoning of Language Models." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/lu2025tmlr-stepcontrolled/)

BibTeX

@article{lu2025tmlr-stepcontrolled,
  title     = {{Step-Controlled DPO: Leveraging Stepwise Errors for Enhancing Mathematical Reasoning of Language Models}},
  author    = {Lu, Zimu and Zhou, Aojun and Wang, Ke and Ren, Houxing and Shi, Weikang and Yang, Yunqiao and Pan, Junting and Zhan, Mingjie and Li, Hongsheng},
  journal   = {Transactions on Machine Learning Research},
  year      = {2025},
  url       = {https://mlanthology.org/tmlr/2025/lu2025tmlr-stepcontrolled/}
}