Monitoring Teams of AI Agents

Abstract

Background: Generative AI agents will need to work together, which requires monitoring and managing their performance. Objectives: The chief objective of this paper is to understand the joint design choice of the number of agents and their rewards. Methods: We study this problem in a theoretical framework of optimal incentives, where a system designer (principal) selects the environment in which multiple autonomous, decentralized AI agents work together. These agents respond to incentives, such as rewards and penalties. We first consider a principal who selects the size of the agent team in addition to their incentives. Results: We prove a general result that the optimal team size varies with the parameters of the environment, but the optimal incentives do not. This invariance property shows that work projects should differ in team size rather than in financial incentives. Conclusions: We show these results are robust in a more general framework, where the principal employs a supervisory AI agent to manage the tasks of the underlying AI team. Finally, we allow the supervisory and worker agents to differ in quality, and find that it is efficient to match the best supervisors with the best worker agents.

Cite

Text

Korok Ray. "Monitoring Teams of AI Agents." Journal of Artificial Intelligence Research, vol. 84, 2025. doi:10.1613/JAIR.1.19798

Markdown

[Korok Ray. "Monitoring Teams of AI Agents." Journal of Artificial Intelligence Research, vol. 84, 2025.](https://mlanthology.org/jair/2025/ray2025jair-monitoring/) doi:10.1613/JAIR.1.19798

BibTeX

@article{ray2025jair-monitoring,
  title     = {{Monitoring Teams of AI Agents}},
  author    = {Ray, Korok},
  journal   = {Journal of Artificial Intelligence Research},
  year      = {2025},
  doi       = {10.1613/JAIR.1.19798},
  volume    = {84},
  url       = {https://mlanthology.org/jair/2025/ray2025jair-monitoring/}
}