Large Language Models Miss the Multi-Agent Mark

Abstract

Recent interest in Multi-Agent Systems of Large Language Models (MAS LLMs) has led to an increase in frameworks leveraging multiple LLMs to tackle complex tasks. However, much of this literature appropriates the terminology of MAS without engaging with its foundational principles. In this position paper, we highlight critical discrepancies between MAS theory and current MAS LLM implementations, focusing on four key areas: the social aspect of agency, environment design, coordination and communication protocols, and the measurement of emergent behaviours. Our position is that many MAS LLMs lack multi-agent characteristics such as autonomy, social interaction, and structured environments, and often rely on oversimplified, LLM-centric architectures. By revisiting problems the MAS literature has already addressed, the field risks slowing down and losing traction. We therefore systematically analyse this issue, outline associated research opportunities, and advocate for better integration of established MAS concepts and more precise terminology to avoid mischaracterisation and missed opportunities.

Cite

Text

La Malfa et al. "Large Language Models Miss the Multi-Agent Mark." Advances in Neural Information Processing Systems, 2025.

Markdown

[La Malfa et al. "Large Language Models Miss the Multi-Agent Mark." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/malfa2025neurips-large/)

BibTeX

@inproceedings{malfa2025neurips-large,
  title     = {{Large Language Models Miss the Multi-Agent Mark}},
  author    = {La Malfa, Emanuele and La Malfa, Gabriele and Marro, Samuele and Zhang, Jie M. and Black, Elizabeth and Luck, Michael and Torr, Philip and Wooldridge, Michael J.},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/malfa2025neurips-large/}
}