Can Large Language Models Reason Robustly with Noisy Rationales?
Abstract
This paper investigates an under-explored challenge in large language models (LLMs): reasoning with noisy rationales, i.e., prompts containing irrelevant or inaccurate reasoning steps, despite advancements in in-context learning and chain-of-thought strategies. We construct the NoRa dataset, specifically designed to evaluate LLMs' robustness to noisy rationales. Using NoRa, we reveal a widespread vulnerability of LLMs to such noise and show that existing robust methods offer only limited efficacy.
Cite
Text
Zhou et al. "Can Large Language Models Reason Robustly with Noisy Rationales?" ICLR 2024 Workshops: R2-FM, 2024.
Markdown
[Zhou et al. "Can Large Language Models Reason Robustly with Noisy Rationales?" ICLR 2024 Workshops: R2-FM, 2024.](https://mlanthology.org/iclrw/2024/zhou2024iclrw-large/)
BibTeX
@inproceedings{zhou2024iclrw-large,
  title     = {{Can Large Language Models Reason Robustly with Noisy Rationales?}},
  author    = {Zhou, Zhanke and Tao, Rong and Zhu, Jianing and Luo, Yiwen and Wang, Zengmao and Han, Bo},
  booktitle = {ICLR 2024 Workshops: R2-FM},
  year      = {2024},
  url       = {https://mlanthology.org/iclrw/2024/zhou2024iclrw-large/}
}