Universal Self-Consistency for Large Language Models
Abstract
Self-consistency with chain-of-thought (CoT) prompting has demonstrated remarkable performance gains by utilizing multiple reasoning paths sampled from large language models (LLMs). However, self-consistency relies on heuristics to extract answers and aggregate multiple solutions, which makes it inapplicable to tasks with free-form answers. In this work, we propose Universal Self-Consistency (USC), which leverages LLMs themselves to select the most consistent answer among multiple candidates. We evaluate USC on a variety of benchmarks, including mathematical reasoning, code generation, long-context summarization, and open-ended question answering. On open-ended generation tasks where the original self-consistency is not applicable, USC effectively leverages multiple samples and improves performance. For mathematical reasoning, USC matches standard self-consistency without requiring the answer formats to be similar. Finally, on code generation, USC performs on par with execution-based voting methods without requiring access to execution results.
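As a rough illustration of the USC procedure described in the abstract, the sketch below samples several candidate responses and then asks the model itself to pick the most consistent one. It assumes an OpenAI-style chat-completions client; the model name, prompt wording, and helper functions are illustrative placeholders, not the paper's exact setup.

```python
# Minimal sketch of Universal Self-Consistency (USC).
# Assumptions: OpenAI-style chat-completions API; model name and prompt
# wording are placeholders, not taken verbatim from the paper.
import re
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name


def sample_responses(task_prompt: str, n: int = 8, temperature: float = 0.7) -> list[str]:
    """Step 1: sample multiple candidate responses for the task."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": task_prompt}],
        n=n,
        temperature=temperature,
    )
    return [choice.message.content for choice in resp.choices]


def select_most_consistent(task_prompt: str, candidates: list[str]) -> str:
    """Step 2: ask the model to select the most consistent candidate."""
    numbered = "\n\n".join(
        f"Response {i + 1}:\n{c}" for i, c in enumerate(candidates)
    )
    selection_prompt = (
        f"I have generated the following responses to the question: {task_prompt}\n\n"
        f"{numbered}\n\n"
        "Evaluate these responses and select the most consistent response "
        "based on majority consensus. Answer with the response number only."
    )
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": selection_prompt}],
        temperature=0.0,  # greedy decoding for the selection step
    )
    # Parse the chosen index; fall back to the first sample if parsing fails.
    match = re.search(r"\d+", resp.choices[0].message.content)
    idx = int(match.group()) - 1 if match else 0
    return candidates[min(max(idx, 0), len(candidates) - 1)]


if __name__ == "__main__":
    question = "A store sells 3 pencils for $0.45. How much do 12 pencils cost?"
    samples = sample_responses(question)
    print(select_most_consistent(question, samples))
```

Unlike standard self-consistency, no answer-extraction heuristic or exact-match voting is needed: the selection step works directly on free-form responses, which is what makes the approach applicable to summarization and open-ended question answering.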
Cite
Chen et al. "Universal Self-Consistency for Large Language Models." ICML 2024 Workshops: ICL, 2024. https://mlanthology.org/icmlw/2024/chen2024icmlw-universal/

BibTeX
@inproceedings{chen2024icmlw-universal,
title = {{Universal Self-Consistency for Large Language Models}},
author = {Chen, Xinyun and Aksitov, Renat and Alon, Uri and Ren, Jie and Xiao, Kefan and Yin, Pengcheng and Prakash, Sushant and Sutton, Charles and Wang, Xuezhi and Zhou, Denny},
booktitle = {ICML 2024 Workshops: ICL},
year = {2024},
url = {https://mlanthology.org/icmlw/2024/chen2024icmlw-universal/}
}