Incentivizing LLMs to Self-Verify Their Answers
Abstract
Large Language Models (LLMs) have demonstrated remarkable progress on complex reasoning tasks through both post-training and test-time scaling. While prevalent test-time scaling approaches often rely on external reward models to guide the generation process, we find that only marginal gains are achieved when scaling a model post-trained on specific reasoning tasks. We identify that this limited improvement stems from distribution discrepancies between the task-specific post-trained generator and the general reward model. To address this, we propose a framework that incentivizes LLMs to self-verify their own answers. By unifying answer generation and verification within a single reinforcement learning (RL) process, we train models that can effectively assess the correctness of their own solutions. The trained model can further scale its performance at inference time by verifying its own generations, without the need for external verifiers. We train our self-verification models based on Qwen2.5-Math-7B and DeepSeek-R1-Distill-Qwen-1.5B, demonstrating their capabilities across varying reasoning context lengths. Experiments on multiple mathematical reasoning benchmarks show that our models not only improve post-training performance but also enable effective test-time scaling. Our code is available at https://github.com/mansicer/self-verification.
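As a rough illustration of the inference-time procedure the abstract describes, the sketch below shows best-of-n selection where the same model both generates candidate answers and scores them, so no external verifier is involved. This is a minimal sketch under stated assumptions, not the authors' released implementation: the `generate` and `self_verify` callables are hypothetical stand-ins for prompting one self-verification model in its two roles.

```python
# Minimal sketch of test-time scaling via self-verification (best-of-n).
# The `generate` and `self_verify` callables are hypothetical placeholders
# for the same underlying model acting as generator and as verifier; they
# are not the API of the paper's released code.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Candidate:
    answer: str
    verify_score: float  # the model's own estimate that its answer is correct


def best_of_n_self_verified(
    generate: Callable[[str], str],            # samples one solution for a problem
    self_verify: Callable[[str, str], float],  # same model scores (problem, answer)
    problem: str,
    n: int = 8,
) -> Candidate:
    """Sample n solutions, score each with the model's own verifier,
    and return the highest-scoring candidate."""
    candidates: List[Candidate] = []
    for _ in range(n):
        answer = generate(problem)
        score = self_verify(problem, answer)
        candidates.append(Candidate(answer, score))
    return max(candidates, key=lambda c: c.verify_score)
```

Because the generator and verifier are one model trained jointly, this selection step sidesteps the generator-vs-reward-model distribution mismatch that the abstract identifies as the bottleneck for external-verifier scaling.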
Cite
Text
Zhang et al. "Incentivizing LLMs to Self-Verify Their Answers." Advances in Neural Information Processing Systems, 2025.Markdown
[Zhang et al. "Incentivizing LLMs to Self-Verify Their Answers." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/zhang2025neurips-incentivizing/)BibTeX
@inproceedings{zhang2025neurips-incentivizing,
  title = {{Incentivizing LLMs to Self-Verify Their Answers}},
  author = {Zhang, Fuxiang and Xu, Jiacheng and Wang, Chaojie and Cui, Ce and Liu, Yang and An, Bo},
  booktitle = {Advances in Neural Information Processing Systems},
  year = {2025},
  url = {https://mlanthology.org/neurips/2025/zhang2025neurips-incentivizing/}
}