Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models

Abstract

We present STEP-BACK PROMPTING, a simple prompting technique that enables LLMs to do abstractions to derive high-level concepts and first principles from instances containing specific details. Using the concepts and principles to guide reasoning, LLMs significantly improve their abilities in following a correct reasoning path towards the solution. We conduct experiments of STEP-BACK PROMPTING with PaLM-2L, GPT-4 and Llama2-70B models, and observe substantial performance gains on various challenging reasoning-intensive tasks including STEM, Knowledge QA, and Multi-Hop Reasoning. For instance, STEP-BACK PROMPTING improves PaLM-2L performance on MMLU (Physics and Chemistry) by 7% and 11% respectively, TimeQA by 27%, and MuSiQue by 7%.
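The abstract describes a two-stage procedure: first elicit an abstraction (a high-level concept or first principle) with a "step-back" question, then answer the original question grounded on that abstraction. Below is a minimal sketch of this flow, assuming a generic llm callable (prompt string in, completion string out); the prompt wording is illustrative and not the paper's exact templates.

from typing import Callable

def step_back_prompting(llm: Callable[[str], str], question: str) -> str:
    # Stage 1: abstraction -- ask a step-back question to surface the
    # underlying concept or first principle behind the original question.
    step_back_question = llm(
        "Rewrite the following question as a more generic, higher-level "
        f"question about the underlying concept or principle:\n{question}"
    )
    principle = llm(step_back_question)

    # Stage 2: reasoning -- answer the original question, guided by the
    # retrieved principle rather than the specific surface details alone.
    return llm(
        f"Principle: {principle}\n"
        "Using this principle, answer the original question step by step:\n"
        f"{question}"
    )

The llm callable here is a hypothetical stand-in for whichever model is used (e.g. PaLM-2L, GPT-4, or Llama2-70B in the paper's experiments); any concrete API call would replace it.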

Cite

Text

Zheng et al. "Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models." International Conference on Learning Representations, 2024.

Markdown

[Zheng et al. "Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/zheng2024iclr-take/)

BibTeX

@inproceedings{zheng2024iclr-take,
  title     = {{Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models}},
  author    = {Zheng, Huaixiu Steven and Mishra, Swaroop and Chen, Xinyun and Cheng, Heng-Tze and Chi, Ed H. and Le, Quoc V. and Zhou, Denny},
  booktitle = {International Conference on Learning Representations},
  year      = {2024},
  url       = {https://mlanthology.org/iclr/2024/zheng2024iclr-take/}
}