Chain-of-Thought Prompting Elicits Reasoning in Large Language Models

Abstract

We explore how generating a chain of thought, a series of intermediate reasoning steps, significantly improves the ability of large language models to perform complex reasoning. In particular, we show how such reasoning abilities emerge naturally in sufficiently large language models via a simple method called chain-of-thought prompting, where a few chain-of-thought demonstrations are provided as exemplars in prompting. Experiments on three large language models show that chain-of-thought prompting improves performance on a range of arithmetic, commonsense, and symbolic reasoning tasks. The empirical gains can be striking. For instance, prompting a 540B-parameter language model with just eight chain-of-thought exemplars achieves state-of-the-art accuracy on the GSM8K benchmark of math word problems, surpassing even finetuned GPT-3 with a verifier.
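
Mechanically, the method amounts to prepending a few worked (question, reasoning, answer) exemplars to the test question and requesting an ordinary completion. The Python sketch below is a minimal illustration, not the paper's released prompt code: it uses a single exemplar adapted from the paper's Figure 1 (the full prompts use eight), and `build_cot_prompt` is a hypothetical helper for any text-completion model.

```python
# Minimal sketch of few-shot chain-of-thought prompting. The paper's
# prompts use eight (question, chain of thought, answer) exemplars;
# one is shown here for brevity, adapted from Figure 1 of the paper.

COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is "
    "6 tennis balls. 5 + 6 = 11. The answer is 11.\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend chain-of-thought exemplars so the model imitates the
    step-by-step reasoning format before stating a final answer."""
    return f"{COT_EXEMPLAR}\nQ: {question}\nA:"

# The assembled prompt is sent to a language model as an ordinary
# completion request; no finetuning or gradient updates are involved.
prompt = build_cot_prompt(
    "The cafeteria had 23 apples. If they used 20 to make lunch "
    "and bought 6 more, how many apples do they have?"
)
print(prompt)
```

Because each exemplar answer ends with a fixed phrase ("The answer is ..."), the final answer can be parsed out of the model's generated reasoning chain with a simple string match.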

Cite

Text

Wei et al. "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models." Neural Information Processing Systems, 2022.

Markdown

[Wei et al. "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/wei2022neurips-chainofthought/)

BibTeX

@inproceedings{wei2022neurips-chainofthought,
  title     = {{Chain-of-Thought Prompting Elicits Reasoning in Large Language Models}},
  author    = {Wei, Jason and Wang, Xuezhi and Schuurmans, Dale and Bosma, Maarten and Ichter, Brian and Xia, Fei and Chi, Ed and Le, Quoc V. and Zhou, Denny},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/wei2022neurips-chainofthought/}
}