ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search

Abstract

Recent approaches to LLM self-training mostly rely on the LLM generating responses and filtering those with correct output answers as training data. This approach often yields a low-quality fine-tuning training set (e.g., traces with incorrect plans or intermediate reasoning). In this paper, we develop a reinforced self-training approach, called ReST-MCTS*, that integrates process reward guidance with tree search (MCTS*) to collect higher-quality reasoning traces as well as per-step values for training policy and reward models. ReST-MCTS* circumvents the per-step manual annotation typically used to train process reward models via tree-search-based reinforcement learning: given oracle final answers, ReST-MCTS* infers the correct process rewards by estimating the probability that each step leads to the correct answer. These inferred rewards serve dual purposes: they act as value targets for further refining the process reward model, and they facilitate the selection of high-quality traces for policy model self-training. We first show that the tree-search policy in ReST-MCTS* achieves higher accuracy than prior LLM reasoning baselines such as Best-of-N and Tree-of-Thought within the same search budget. We then show that by using traces found by this tree-search policy as training data, we can continuously enhance three language models over multiple iterations, outperforming other self-training algorithms such as ReST$^\text{EM}$ and Self-Rewarding LM.
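The core idea of inferring process rewards can be illustrated with a toy sketch. The snippet below is not the paper's actual MCTS* algorithm: it replaces the LLM policy with random choices over two hypothetical step actions and replaces tree search with plain Monte Carlo rollouts, but it shows the same reward-inference principle — given an oracle final answer, a reasoning prefix's value is estimated as the empirical probability that continuing from it reaches the correct answer. All names (`rollout`, `infer_process_rewards`) and the toy arithmetic task are illustrative assumptions.

```python
import random
from collections import defaultdict

def rollout(step_choices, max_depth):
    """Sample one reasoning trace as a sequence of steps.
    Toy stand-in for an LLM policy generating step-by-step reasoning."""
    return [random.choice(step_choices) for _ in range(max_depth)]

def infer_process_rewards(num_rollouts, step_choices, max_depth, target):
    """Estimate each prefix's process reward as the empirical probability
    that continuations from it reach the oracle-correct final answer."""
    visits = defaultdict(int)     # prefix -> number of rollouts through it
    successes = defaultdict(int)  # prefix -> rollouts that reached the target
    traces = []
    for _ in range(num_rollouts):
        trace = rollout(step_choices, max_depth)
        correct = sum(trace) == target  # oracle check on the final answer
        traces.append((trace, correct))
        for depth in range(1, max_depth + 1):
            prefix = tuple(trace[:depth])
            visits[prefix] += 1
            successes[prefix] += correct
    # Inferred per-step values: targets for a process reward model,
    # and a filter for selecting high-quality traces for self-training.
    values = {p: successes[p] / visits[p] for p in visits}
    return values, traces
```

In the full method, these inferred values would both supervise the process reward model and select correct, high-value traces for the policy model's next self-training iteration; here they are just a dictionary of prefix-to-probability estimates.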

Cite

Text

Zhang et al. "ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search." Neural Information Processing Systems, 2024. doi:10.52202/079017-2066

Markdown

[Zhang et al. "ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/zhang2024neurips-restmcts/) doi:10.52202/079017-2066

BibTeX

@inproceedings{zhang2024neurips-restmcts,
  title     = {{ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search}},
  author    = {Zhang, Dan and Zhoubian, Sining and Hu, Ziniu and Yue, Yisong and Dong, Yuxiao and Tang, Jie},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-2066},
  url       = {https://mlanthology.org/neurips/2024/zhang2024neurips-restmcts/}
}