Embodied LLM Agents Learn to Cooperate in Organized Teams
Abstract
Large Language Models (LLMs) have emerged as integral tools for reasoning, planning, and decision-making, drawing upon their extensive world knowledge and proficiency in language-related tasks. LLMs thus hold tremendous potential for natural language interaction within multi-agent systems to foster cooperation. However, LLM agents tend to over-report and comply with any instruction, which may result in information redundancy and confusion in multi-agent cooperation. Inspired by human organizations, this paper introduces a framework that imposes prompt-based organization structures on LLM agents to mitigate these problems. Through a series of experiments with embodied LLM agents and human-agent collaboration, our results highlight the impact of designated leadership on team efficiency, shedding light on the leadership qualities displayed by LLM agents and their spontaneous cooperative behaviors. Further, we harness the potential of LLMs to propose enhanced organizational prompts, via a Criticize-Reflect process, resulting in novel organization structures that reduce communication costs and enhance team efficiency.
Cite
Text
Guo et al. "Embodied LLM Agents Learn to Cooperate in Organized Teams." NeurIPS 2024 Workshops: LanGame, 2024.
Markdown
[Guo et al. "Embodied LLM Agents Learn to Cooperate in Organized Teams." NeurIPS 2024 Workshops: LanGame, 2024.](https://mlanthology.org/neuripsw/2024/guo2024neuripsw-embodied/)
BibTeX
@inproceedings{guo2024neuripsw-embodied,
  title     = {{Embodied LLM Agents Learn to Cooperate in Organized Teams}},
  author    = {Guo, Xudong and Huang, Kaixuan and Liu, Jiale and Fan, Wenhui and Vélez, Natalia and Wu, Qingyun and Wang, Huazheng and Griffiths, Thomas L. and Wang, Mengdi},
  booktitle = {NeurIPS 2024 Workshops: LanGame},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/guo2024neuripsw-embodied/}
}