More Agents Is All You Need
Abstract
We find that, simply via a sampling-and-voting method, the performance of large language models (LLMs) scales with the number of agents instantiated. Moreover, this method, termed Agent Forest, is orthogonal to existing complicated methods that further enhance LLMs, and the degree of enhancement is correlated with task difficulty. We conduct comprehensive experiments on a wide range of LLM benchmarks to verify our finding and to study the properties that facilitate its occurrence. Our code is publicly available at: https://github.com/MoreAgentsIsAllYouNeed/AgentForest
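To make the sampling-and-voting idea concrete, below is a minimal Python sketch, not the authors' implementation (which lives in the repository above): `query_llm` is a hypothetical placeholder for a single agent call, and the vote assumes exact-match answers; open-ended outputs would need a similarity-based vote instead.

```python
from collections import Counter

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for one LLM agent call.

    Replace with your model API of choice; the real Agent Forest
    code is in the linked repository.
    """
    raise NotImplementedError

def sample_and_vote(prompt: str, num_agents: int) -> str:
    """Sample one answer from each of `num_agents` agents,
    then return the most frequent (majority-vote) answer."""
    answers = [query_llm(prompt) for _ in range(num_agents)]
    winner, _count = Counter(answers).most_common(1)[0]
    return winner

# Usage: the paper's finding is that accuracy tends to improve
# as num_agents grows.
# answer = sample_and_vote("What is 17 * 24?", num_agents=10)
```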
Cite

Text

Li et al. "More Agents Is All You Need." Transactions on Machine Learning Research, 2024.

Markdown

[Li et al. "More Agents Is All You Need." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/li2024tmlr-more/)

BibTeX
@article{li2024tmlr-more,
  title = {{More Agents Is All You Need}},
  author = {Li, Junyou and Zhang, Qin and Yu, Yangbin and Fu, Qiang and Ye, Deheng},
  journal = {Transactions on Machine Learning Research},
  year = {2024},
  url = {https://mlanthology.org/tmlr/2024/li2024tmlr-more/}
}