Scaling Large Language Model-Based Multi-Agent Collaboration
Abstract
Recent breakthroughs in large language model-driven autonomous agents have revealed that multi-agent collaboration often surpasses any individual agent through collective reasoning. Inspired by the neural scaling law, which holds that increasing the number of neurons enhances performance, this study explores whether continuously adding collaborative agents yields similar benefits. Technically, we utilize directed acyclic graphs to organize agents into a multi-agent collaboration network (MacNet), upon which their interactive reasoning is topologically orchestrated for autonomous task solving. Extensive evaluations reveal that MacNet effectively supports collaboration among more than a thousand agents, with irregular topologies outperforming regular ones. We also identify a collaborative scaling law: overall performance follows a logistic growth pattern as the number of agents increases, with collaborative emergence occurring earlier than traditional neural emergence. We speculate this may be because scaling up the agents catalyzes multidimensional consideration during interactive reflection and refinement, thereby producing more comprehensive artifacts. The code is available at https://github.com/OpenBMB/ChatDev/tree/macnet.
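The topological orchestration described in the abstract can be pictured concretely: agents sit on the nodes of a DAG, and each agent refines the solutions flowing in from its predecessors before passing its own output downstream. The sketch below is a minimal, hypothetical illustration of that idea using Kahn's topological sort; `call_agent`, the role profiles, and the diamond topology are illustrative stand-ins, not the actual MacNet implementation from the repository.

```python
from collections import deque

def call_agent(profile, upstream, task):
    # Placeholder for an LLM call: a real system would prompt a model
    # with the task plus the upstream agents' solutions.
    context = " | ".join(upstream) if upstream else task
    return f"<{profile}: refined {context}>"

def run_macnet(nodes, edges, task):
    """Run agents over a DAG in topological order (Kahn's algorithm).

    nodes: dict mapping node id -> agent role profile.
    edges: iterable of (u, v) pairs; u's solution flows into v.
    Returns the solution produced by each agent.
    """
    preds = {n: [] for n in nodes}
    succs = {n: [] for n in nodes}
    for u, v in edges:
        preds[v].append(u)
        succs[u].append(v)

    indegree = {n: len(preds[n]) for n in nodes}
    ready = deque(n for n in nodes if indegree[n] == 0)  # source agents
    solutions = {}
    while ready:
        n = ready.popleft()  # all of n's inputs are now available
        solutions[n] = call_agent(nodes[n], [solutions[p] for p in preds[n]], task)
        for s in succs[n]:
            indegree[s] -= 1
            if indegree[s] == 0:
                ready.append(s)
    return solutions

# Example: a four-agent diamond topology.
nodes = {0: "architect", 1: "reviewer-A", 2: "reviewer-B", 3: "integrator"}
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
print(run_macnet(nodes, edges, "write a snake game"))
```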
Cite
Text
Qian et al. "Scaling Large Language Model-Based Multi-Agent Collaboration." International Conference on Learning Representations, 2025.
Markdown
[Qian et al. "Scaling Large Language Model-Based Multi-Agent Collaboration." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/qian2025iclr-scaling/)
BibTeX
@inproceedings{qian2025iclr-scaling,
title = {{Scaling Large Language Model-Based Multi-Agent Collaboration}},
author = {Qian, Chen and Xie, Zihao and Wang, YiFei and Liu, Wei and Zhu, Kunlun and Xia, Hanchen and Dang, Yufan and Du, Zhuoyun and Chen, Weize and Yang, Cheng and Liu, Zhiyuan and Sun, Maosong},
booktitle = {International Conference on Learning Representations},
year = {2025},
url = {https://mlanthology.org/iclr/2025/qian2025iclr-scaling/}
}