RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning and Verification in Long-Horizon Generation
Abstract
We explore how iteratively revising a chain of thoughts with the help of information retrieval significantly improves large language models' reasoning and generation ability in long-horizon generation tasks, while greatly mitigating hallucination. In particular, the proposed method, retrieval-augmented thoughts (RAT), revises each thought step one by one with information retrieved for the task query and the current and past thought steps, after the initial zero-shot CoT is generated. Applying RAT to GPT-3.5, GPT-4, and CodeLLaMA substantially improves their performance on various long-horizon generation tasks, with average relative increases in rating scores of 13.63% on code generation, 16.96% on mathematical reasoning, 19.2% on creative writing, and 42.78% on embodied task planning.
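The abstract describes RAT's core loop: draft a zero-shot chain of thought, then revise each step against retrieved evidence before producing the final answer. Below is a minimal sketch of that loop, assuming hypothetical `llm` (prompt-to-text completion) and `retrieve` (query-to-passages search) callables; it illustrates the idea stated in the abstract, not the authors' actual implementation.

```python
def rat(query: str, llm, retrieve, top_k: int = 3) -> str:
    """Sketch of retrieval-augmented thoughts (RAT) as described in the abstract.

    `llm` and `retrieve` are assumed, user-supplied callables:
      llm(prompt: str) -> str
      retrieve(query: str, k: int) -> list[str]
    """
    # Step 1: generate an initial zero-shot chain of thought (CoT).
    draft = llm(f"Answer step by step:\n{query}")
    thoughts = draft.split("\n\n")  # treat paragraphs as thought steps

    revised: list[str] = []
    for step in thoughts:
        # Step 2: retrieve information relevant to the task query,
        # the previously revised steps, and the current step.
        retrieval_query = "\n".join([query, *revised, step])
        evidence = "\n".join(retrieve(retrieval_query, k=top_k))

        # Step 3: revise the current thought step in light of the evidence.
        revised_step = llm(
            f"Task: {query}\n"
            f"Retrieved evidence:\n{evidence}\n"
            f"Thoughts so far:\n" + "\n\n".join(revised) + "\n\n"
            f"Revise this step for factual accuracy:\n{step}"
        )
        revised.append(revised_step)

    # Step 4: produce the final answer from the fully revised chain.
    return llm(
        f"Task: {query}\nRevised reasoning:\n"
        + "\n\n".join(revised)
        + "\nFinal answer:"
    )
```

The key design choice, per the abstract, is that retrieval is conditioned not only on the task query but also on the current and past thought steps, so later revisions can draw on context accumulated earlier in the chain.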
Cite
Text
Wang et al. "RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning and Verification in Long-Horizon Generation." NeurIPS 2024 Workshops: OWA, 2024.
Markdown
[Wang et al. "RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning and Verification in Long-Horizon Generation." NeurIPS 2024 Workshops: OWA, 2024.](https://mlanthology.org/neuripsw/2024/wang2024neuripsw-rat/)
BibTeX
@inproceedings{wang2024neuripsw-rat,
  title = {{RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning and Verification in Long-Horizon Generation}},
  author = {Wang, Zihao and Liu, Anji and Lin, Haowei and Li, Jiaqi and Ma, Xiaojian and Liang, Yitao},
  booktitle = {NeurIPS 2024 Workshops: OWA},
  year = {2024},
  url = {https://mlanthology.org/neuripsw/2024/wang2024neuripsw-rat/}
}