Uncertainty-Aware Step-Wise Verification with Generative Reward Models

Abstract

Complex multi-step reasoning tasks, such as solving mathematical problems, remain challenging for large language models (LLMs). While outcome supervision is commonly used, process supervision via process reward models (PRMs) provides intermediate rewards to verify step-wise correctness in solution traces. However, as proxies for human judgment, PRMs suffer from reliability issues, including susceptibility to reward hacking. In this work, we propose leveraging uncertainty quantification (UQ) to enhance the reliability of step-wise verification with generative reward models for mathematical reasoning tasks. We introduce CoT Entropy, a novel UQ method that outperforms existing approaches in quantifying a PRM's uncertainty in step-wise verification. Our results demonstrate that incorporating uncertainty estimates improves the robustness of judge-LM PRMs, leading to more reliable verification.
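
The abstract does not spell out how CoT Entropy is computed; a minimal sketch of entropy-based uncertainty quantification for a generative step-wise verifier might look like the following (the sampler interface, the two-way correct/incorrect verdict space, and the entropy threshold are illustrative assumptions, not the authors' implementation):

import math
from collections import Counter

def verdict_entropy(verdicts):
    """Shannon entropy (in nats) of the empirical verdict distribution.

    `verdicts` holds discrete judgments (e.g. "correct"/"incorrect") from
    multiple chain-of-thought rationales sampled from a generative reward
    model for a single solution step.
    """
    n = len(verdicts)
    return -sum((c / n) * math.log(c / n) for c in Counter(verdicts).values())

def verify_step(step, sample_verdict, n_samples=8, entropy_threshold=0.5):
    """Return the majority verdict plus an uncertainty flag for one step.

    `sample_verdict(step)` is a hypothetical callable that draws one CoT
    rationale from the judge LM and returns its final verdict for the step.
    High entropy across sampled rationales marks the judgment as unreliable,
    so it can be deferred or escalated rather than trusted outright.
    """
    verdicts = [sample_verdict(step) for _ in range(n_samples)]
    entropy = verdict_entropy(verdicts)
    majority = Counter(verdicts).most_common(1)[0][0]
    return majority, entropy, entropy <= entropy_threshold

Under this reading, low entropy means the sampled rationales agree and the verdict can be trusted, while high entropy flags steps where the reward model's judgment is unreliable.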

Cite

Text

Ye et al. "Uncertainty-Aware Step-Wise Verification with Generative Reward Models." ICLR 2025 Workshops: QUESTION, 2025.

Markdown

[Ye et al. "Uncertainty-Aware Step-Wise Verification with Generative Reward Models." ICLR 2025 Workshops: QUESTION, 2025.](https://mlanthology.org/iclrw/2025/ye2025iclrw-uncertaintyaware/)

BibTeX

@inproceedings{ye2025iclrw-uncertaintyaware,
  title     = {{Uncertainty-Aware Step-Wise Verification with Generative Reward Models}},
  author    = {Ye, Zihuiwen and Melo, Luckeciano Carvalho and Kaddar, Younesse and Blunsom, Phil and Staton, Sam and Gal, Yarin},
  booktitle = {ICLR 2025 Workshops: QUESTION},
  year      = {2025},
  url       = {https://mlanthology.org/iclrw/2025/ye2025iclrw-uncertaintyaware/}
}