Enhancing Cooperative Multi-Agent Reinforcement Learning with State Modelling and Adversarial Exploration
Abstract
Learning to cooperate in distributed partially observable environments with no communication abilities poses significant challenges for multi-agent deep reinforcement learning (MARL). This paper addresses key concerns in this domain, focusing on inferring state representations from individual agent observations and leveraging these representations to enhance agents’ exploration and collaborative task execution policies. To this end, we propose a novel state modelling framework for cooperative MARL, where agents infer meaningful belief representations of the non-observable state, with respect to optimizing their own policies, while filtering redundant and less informative joint state information. Building upon this framework, we propose the MARL SMPE$^2$ algorithm. In SMPE$^2$, agents enhance their own policy’s discriminative abilities under partial observability, explicitly by incorporating their beliefs into the policy network, and implicitly by adopting an adversarial type of exploration policy that encourages agents to discover novel, high-value states while improving the discriminative abilities of others. Experimentally, we show that SMPE$^2$ outperforms a plethora of state-of-the-art MARL algorithms in complex fully cooperative tasks from the MPE, LBF, and RWARE benchmarks.
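To make the core idea of the abstract concrete, the following is a minimal, speculative sketch (not the authors' implementation) of a belief-conditioned agent: a per-agent encoder maps the local observation to a belief embedding of the unobserved state, and the policy conditions explicitly on the concatenation of observation and belief. The class names, the shape of the exploration bonus, and all hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# Speculative sketch, assuming PyTorch; not the authors' code.
import torch
import torch.nn as nn


class BeliefEncoder(nn.Module):
    """Maps an agent's local observation to a belief embedding of the hidden joint state."""

    def __init__(self, obs_dim: int, belief_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, belief_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


class BeliefConditionedPolicy(nn.Module):
    """Actor that conditions explicitly on the inferred belief (observation ++ belief)."""

    def __init__(self, obs_dim: int, belief_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + belief_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs: torch.Tensor, belief: torch.Tensor) -> torch.distributions.Categorical:
        logits = self.net(torch.cat([obs, belief], dim=-1))
        return torch.distributions.Categorical(logits=logits)


def exploration_bonus(model_error: torch.Tensor, value_estimate: torch.Tensor, beta: float = 0.1) -> torch.Tensor:
    """Illustrative intrinsic reward (an assumption about how the adversarial exploration
    term might be shaped): favor states that are both poorly predicted by the state model
    (novel) and estimated to be high-value."""
    return beta * model_error * value_estimate.clamp(min=0.0)


if __name__ == "__main__":
    obs_dim, belief_dim, n_actions = 12, 8, 5
    encoder = BeliefEncoder(obs_dim, belief_dim)
    policy = BeliefConditionedPolicy(obs_dim, belief_dim, n_actions)
    obs = torch.randn(4, obs_dim)          # batch of 4 local observations
    dist = policy(obs, encoder(obs))       # belief-conditioned action distribution
    print(dist.sample().shape)             # torch.Size([4])
```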
Cite
Text
Kontogiannis et al. "Enhancing Cooperative Multi-Agent Reinforcement Learning with State Modelling and Adversarial Exploration." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Kontogiannis et al. "Enhancing Cooperative Multi-Agent Reinforcement Learning with State Modelling and Adversarial Exploration." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/kontogiannis2025icml-enhancing/)
BibTeX
@inproceedings{kontogiannis2025icml-enhancing,
title = {{Enhancing Cooperative Multi-Agent Reinforcement Learning with State Modelling and Adversarial Exploration}},
author = {Kontogiannis, Andreas and Papathanasiou, Konstantinos and Shen, Yi and Stamou, Giorgos and Zavlanos, Michael M. and Vouros, George},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
  pages = {31437--31466},
volume = {267},
url = {https://mlanthology.org/icml/2025/kontogiannis2025icml-enhancing/}
}