Is PRM Necessary? Problem-Solving RL Implicitly Induces PRM Capability in LLMs

Abstract

The development of reasoning capabilities represents a critical frontier in large language model (LLM) research, where reinforcement learning (RL) and process reward models (PRMs) have emerged as the predominant methodological frameworks. Contrary to conventional wisdom, empirical evidence from DeepSeek-R1 demonstrates that pure RL training focused on mathematical problem-solving can progressively enhance reasoning ability without PRM integration, challenging the perceived necessity of process supervision. In this study, we conduct a systematic investigation of the relationship between RL training and PRM capabilities. Our findings demonstrate that problem-solving proficiency and process-supervision capability are complementary dimensions of reasoning that co-evolve synergistically during pure RL training. In particular, current PRMs underperform simple baselines such as majority voting when applied to state-of-the-art models like DeepSeek-R1 and QwQ-32B. To address this limitation, we propose Self-PRM, an introspective framework in which a model autonomously evaluates and reranks its own generated solutions through self-reward mechanisms. Although Self-PRM consistently improves benchmark accuracy (particularly with larger sample sizes), our analysis exposes persistent challenges: the approach exhibits low precision (<10%) on difficult problems, frequently misclassifying flawed solutions as valid. These findings underscore the need for combined training with process supervision and continued RL scaling to enhance reward alignment and introspective accuracy. We hope these insights provide actionable guidance for building more reliable and self-aware complex reasoning models.
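
To make the Self-PRM idea concrete, here is a minimal Python sketch, not the paper's actual implementation: the self_score function is a hypothetical stand-in for prompting the same policy model to judge one of its own sampled solutions, and the majority-voting baseline the abstract compares against is included for contrast. All names and signatures below are illustrative assumptions.

    from collections import Counter

    def majority_vote(final_answers):
        # Baseline: return the most frequent final answer among sampled solutions.
        return Counter(final_answers).most_common(1)[0][0]

    def self_prm_rerank(problem, solutions, self_score):
        # Self-PRM-style reranking: the same model that generated `solutions`
        # assigns each one a scalar self-reward, and the top-scoring solution
        # is selected. `self_score(problem, solution) -> float` is a
        # hypothetical hook for that introspective judging step.
        return max(solutions, key=lambda s: self_score(problem, s))

Under this reading, majority voting aggregates only final answers, while Self-PRM reranks whole solutions; the abstract's low-precision finding (<10% on hard problems) would then correspond to self_score assigning high rewards to flawed solutions.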

Cite

Text

Feng et al. "Is PRM Necessary? Problem-Solving RL Implicitly Induces PRM Capability in LLMs." Advances in Neural Information Processing Systems, 2025.

Markdown

[Feng et al. "Is PRM Necessary? Problem-Solving RL Implicitly Induces PRM Capability in LLMs." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/feng2025neurips-prm/)

BibTeX

@inproceedings{feng2025neurips-prm,
  title     = {{Is PRM Necessary? Problem-Solving RL Implicitly Induces PRM Capability in LLMs}},
  author    = {Feng, Zhangyin and Chen, Qianglong and Lu, Ning and Li, Yongqian and Cheng, Siqi and Peng, Shuangmu and Tang, Duyu and Liu, Shengcai and Zhang, Zhirui},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/feng2025neurips-prm/}
}