CoMIX: A Multi-Agent Reinforcement Learning Training Architecture for Efficient Decentralized Coordination and Independent Decision-Making

Abstract

Robust coordination skills enable agents to operate cohesively in shared environments, working together towards a common goal and, ideally, individually without hindering each other's progress. To this end, this paper presents Coordinated QMIX (CoMIX), a novel training framework for decentralized agents that enables emergent coordination through flexible policies, while at the same time allowing independent decision-making at the individual level. CoMIX models selfish and collaborative behavior as incremental steps in each agent's decision process. This allows agents to dynamically adapt their behavior to different situations, balancing independence and collaboration. Experiments using a variety of simulation environments demonstrate that CoMIX outperforms baselines on collaborative tasks. The results validate our incremental approach as an effective technique for improving coordination in multi-agent systems.
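The abstract's central idea, selfish and collaborative behavior as incremental steps in each agent's decision process, can be illustrated with a minimal sketch. This is not the paper's actual architecture (CoMIX builds on QMIX-style value factorization with learned networks); the function names, the additive combination, and the `alpha` weight are all hypothetical, chosen only to make the two-step structure concrete.

```python
import numpy as np

# Illustrative sketch only: real CoMIX uses learned networks, not these
# fixed linear maps. It shows the incremental idea from the abstract:
# a "selfish" utility computed first, then refined by a coordination term.

def selfish_q(obs: np.ndarray, w_self: np.ndarray) -> np.ndarray:
    """Step 1: Q-values from the agent's own observation alone."""
    return obs @ w_self

def coordination_adjustment(messages: np.ndarray, w_coord: np.ndarray) -> np.ndarray:
    """Step 2: an additive correction derived from other agents' messages."""
    return messages.mean(axis=0) @ w_coord

def incremental_decision(obs, messages, w_self, w_coord, alpha=0.5):
    """Combine both steps; alpha (hypothetical) balances independence
    against collaboration: alpha=0 recovers a fully independent agent."""
    q = selfish_q(obs, w_self) + alpha * coordination_adjustment(messages, w_coord)
    return int(np.argmax(q))
```

With `alpha=0` the agent acts purely on its own observation; increasing `alpha` lets coordination information override the selfish preference, mirroring the balance between independence and collaboration described in the abstract.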

Cite

Text

Minelli and Musolesi. "CoMIX: A Multi-Agent Reinforcement Learning Training Architecture for Efficient Decentralized Coordination and Independent Decision-Making." Transactions on Machine Learning Research, 2024.

Markdown

[Minelli and Musolesi. "CoMIX: A Multi-Agent Reinforcement Learning Training Architecture for Efficient Decentralized Coordination and Independent Decision-Making." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/minelli2024tmlr-comix/)

BibTeX

@article{minelli2024tmlr-comix,
  title     = {{CoMIX: A Multi-Agent Reinforcement Learning Training Architecture for Efficient Decentralized Coordination and Independent Decision-Making}},
  author    = {Minelli, Giovanni and Musolesi, Mirco},
  journal   = {Transactions on Machine Learning Research},
  year      = {2024},
  url       = {https://mlanthology.org/tmlr/2024/minelli2024tmlr-comix/}
}