AutoPRM: Automating Procedural Supervision for Multi-Step Reasoning via Controllable Question Decomposition
Abstract
Recent advancements in large language models (LLMs) have shown promise in multi-step reasoning tasks, yet their reliance on extensive manual labeling to provide procedural feedback remains a significant impediment. To address this challenge, in this paper, we propose AutoPRM, a novel self-supervised framework that efficiently enhances the fine-tuning of LLMs for intricate reasoning challenges. Specifically, AutoPRM first decomposes complex problems into more manageable subquestions with a controllable granularity switch, then sequentially applies reinforcement learning to iteratively improve the subquestion solver. Additionally, we propose context-guided decoding to avoid reward tampering and to guide the subquestion solver towards the solution of the holistic problem. Extensive experiments show that AutoPRM significantly improves performance on mathematical and commonsense reasoning tasks over SOTA baselines. More encouragingly, AutoPRM can be easily integrated with other orthogonal reasoning pipelines.
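To make the pipeline concrete, below is a minimal Python sketch of the control flow the abstract describes: decompose the problem, solve subquestions in order, and condition each step on earlier answers so decoding stays anchored to the holistic problem. All names here (autoprm_solve, decomposer, solver, granularity) are illustrative placeholders, not the authors' actual API; the reinforcement-learning loop that trains the solver is omitted.

# Hypothetical sketch of the AutoPRM inference pipeline as summarized in the
# abstract. Function names are placeholders, not the paper's real interface.

def autoprm_solve(question, decomposer, solver, granularity="fine"):
    """Decompose a complex question, then solve its subquestions
    sequentially, feeding prior answers back in as context
    (a stand-in for the paper's context-guided decoding)."""
    # Step 1: controllable question decomposition. The granularity switch
    # decides how finely the problem is split into subquestions.
    subquestions = decomposer(question, granularity=granularity)

    context = []
    for sq in subquestions:
        # Step 2: the subquestion solver conditions on the original question
        # and all previously solved subquestions, so each step is guided by
        # the holistic problem rather than a local shortcut.
        answer = solver(question=question, subquestion=sq, context=context)
        context.append((sq, answer))

    # The answer to the final subquestion is taken as the overall solution.
    return context[-1][1]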
Cite
Text
Chen et al. "AutoPRM: Automating Procedural Supervision for Multi-Step Reasoning via Controllable Question Decomposition." ICLR 2024 Workshops: R2-FM, 2024.
Markdown
[Chen et al. "AutoPRM: Automating Procedural Supervision for Multi-Step Reasoning via Controllable Question Decomposition." ICLR 2024 Workshops: R2-FM, 2024.](https://mlanthology.org/iclrw/2024/chen2024iclrw-autoprm/)
BibTeX
@inproceedings{chen2024iclrw-autoprm,
title = {{AutoPRM: Automating Procedural Supervision for Multi-Step Reasoning via Controllable Question Decomposition}},
author = {Chen, Zhaorun and Zhao, Zhuokai and Zhu, Zhihong and Zhang, Ruiqi and Li, Xiang and Raj, Bhiksha and Yao, Huaxiu},
booktitle = {ICLR 2024 Workshops: R2-FM},
year = {2024},
url = {https://mlanthology.org/iclrw/2024/chen2024iclrw-autoprm/}
}