Large Language Models Can Self-Correct with Minimal Effort
Abstract
Intrinsic self-correction is a method that instructs large language models (LLMs) to verify and correct their responses without external feedback. Unfortunately, the original study concluded that LLMs cannot yet self-correct reasoning. We find that a simple yet effective verification method can unleash the inherent capabilities of LLMs: mask a key condition in the question, append the current response to construct a verification question, and ask the model to predict the masked condition in order to verify the response. The condition can be an entity in an open-domain question or a numeric value in a math question, and it requires minimal effort (via prompting) to identify. We propose an iterative verify-then-correct framework, named ProCo, that progressively identifies and corrects (probably) false responses. We conduct experiments on three reasoning tasks. On average, compared to Self-Correct, ProCo with GPT-3.5-Turbo as the backend LLM yields $+6.8$ exact match on four open-domain question answering datasets, $+14.1$ accuracy on three arithmetic reasoning datasets, and $+9.6$ accuracy on a commonsense reasoning dataset.
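To make the verify-then-correct loop concrete, here is a minimal sketch of the masked-condition verification described in the abstract. It is an illustration under stated assumptions, not the authors' released code: `call_llm` is a hypothetical wrapper around a chat-completion API (e.g., GPT-3.5-Turbo), and the prompts are simplified stand-ins for the paper's actual prompts.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around a chat-completion API (e.g., GPT-3.5-Turbo)."""
    raise NotImplementedError


def identify_key_condition(question: str) -> str:
    # Prompt the LLM to extract a key condition: an entity in an
    # open-domain question, or a numeric value in a math question.
    return call_llm(
        f"Extract the single key condition (an entity or a number) "
        f"from this question:\n{question}"
    ).strip()


def verify(question: str, condition: str, answer: str) -> bool:
    # Mask the key condition, append the current answer to form a
    # verification question, and ask the LLM to predict the masked
    # condition; recovering it suggests the answer is consistent.
    masked = question.replace(condition, "[MASK]")
    verification_q = (
        f"{masked}\nSuppose the answer is {answer}. What is [MASK]?"
    )
    predicted = call_llm(verification_q)
    return condition in predicted


def proco(question: str, max_iters: int = 3) -> str:
    """Iteratively answer, verify via the masked condition, and correct."""
    answer = call_llm(question)
    condition = identify_key_condition(question)
    for _ in range(max_iters):
        if verify(question, condition, answer):
            break  # verification passed; accept the current answer
        # Otherwise, ask the model to revise, noting the failed check.
        answer = call_llm(
            f"{question}\nYour previous answer '{answer}' failed a "
            f"consistency check. Please reconsider and answer again."
        )
    return answer
```

A natural design choice here is that verification reuses the same backend LLM rather than an external tool, which is what makes the self-correction "intrinsic"; the loop simply stops early once the masked condition is recovered.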
Cite
Text
Wu et al. "Large Language Models Can Self-Correct with Minimal Effort." ICML 2024 Workshops: AI4MATH, 2024.
Markdown
[Wu et al. "Large Language Models Can Self-Correct with Minimal Effort." ICML 2024 Workshops: AI4MATH, 2024.](https://mlanthology.org/icmlw/2024/wu2024icmlw-large/)
BibTeX
@inproceedings{wu2024icmlw-large,
  title = {{Large Language Models Can Self-Correct with Minimal Effort}},
  author = {Wu, Zhenyu and Zeng, Qingkai and Zhang, Zhihan and Tan, Zhaoxuan and Shen, Chao and Jiang, Meng},
  booktitle = {ICML 2024 Workshops: AI4MATH},
  year = {2024},
  url = {https://mlanthology.org/icmlw/2024/wu2024icmlw-large/}
}