Multi-Agent Reinforcement Learning with Communication-Constrained Priors
Abstract
Communication is an effective means of improving cooperative policy learning in multi-agent systems. However, lossy communication is prevalent in most real-world scenarios, and existing multi-agent reinforcement learning methods with communication struggle to apply to complex and dynamic real-world environments due to their limited scalability and robustness. To address these challenges, we propose a generalized communication-constrained model that uniformly characterizes communication conditions across different scenarios. We then use this model as a learning prior to distinguish between lossy and lossless messages in specific scenarios. Additionally, drawing on a dual mutual information estimator, we decouple the impact of lossy and lossless messages on distributed decision-making and introduce a communication-constrained multi-agent reinforcement learning framework that quantifies the impact of communication messages into the global reward. Finally, we validate the effectiveness of our approach across several communication-constrained benchmarks.
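The abstract gives no implementation details, but the "dual mutual information estimator" it mentions can be illustrated with a standard neural MI estimator in the MINE style (Belghazi et al., 2018). The sketch below is a hypothetical illustration, not the authors' code: `StatisticsNetwork`, `mine_lower_bound`, and all shapes and hyperparameters are assumptions. Under the decoupling the abstract describes, one might train two such estimators, one for lossless and one for lossy messages, and compare their bounds.

```python
# Hypothetical sketch: a MINE-style neural mutual information estimator
# (Donsker-Varadhan lower bound). Not the authors' implementation; all
# names, shapes, and hyperparameters here are illustrative assumptions.
import math
import torch
import torch.nn as nn

class StatisticsNetwork(nn.Module):
    """Scores (observation, message) pairs; trained to score joint samples higher."""
    def __init__(self, obs_dim: int, msg_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + msg_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs: torch.Tensor, msg: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs, msg], dim=-1))

def mine_lower_bound(T: StatisticsNetwork,
                     obs: torch.Tensor,
                     msg: torch.Tensor) -> torch.Tensor:
    """Donsker-Varadhan bound: I(obs; msg) >= E_joint[T] - log E_marginal[exp(T)].

    Maximizing this quantity over T's parameters tightens the estimate.
    """
    n = msg.size(0)
    joint = T(obs, msg).mean()
    # Shuffling messages across the batch approximates sampling from the
    # product of marginals p(obs) p(msg).
    marginal = T(obs, msg[torch.randperm(n)])
    return joint - (torch.logsumexp(marginal, dim=0) - math.log(n)).squeeze()

# Example: two estimators, one per message type, mirroring the lossy/lossless
# decoupling the abstract suggests (again, an assumption about the framework).
if __name__ == "__main__":
    obs = torch.randn(64, 16)
    lossless_msg = torch.randn(64, 8)
    lossy_msg = lossless_msg + 0.5 * torch.randn(64, 8)  # simulated noisy channel
    T_lossless = StatisticsNetwork(16, 8)
    T_lossy = StatisticsNetwork(16, 8)
    print(mine_lower_bound(T_lossless, obs, lossless_msg).item())
    print(mine_lower_bound(T_lossy, obs, lossy_msg).item())
```

How the two MI estimates are folded into the global reward is specific to the paper and not recoverable from the abstract, so it is left out of this sketch.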
Cite
Text
Yang et al. "Multi-Agent Reinforcement Learning with Communication-Constrained Priors." Advances in Neural Information Processing Systems, 2025.
Markdown
[Yang et al. "Multi-Agent Reinforcement Learning with Communication-Constrained Priors." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/yang2025neurips-multiagent/)
BibTeX
@inproceedings{yang2025neurips-multiagent,
title = {{Multi-Agent Reinforcement Learning with Communication-Constrained Priors}},
author = {Yang, Guang and Yang, Tianpei and Qiao, Jingwen and Wu, Yanqing and Huo, Jing and Chen, Xingguo and Gao, Yang},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/yang2025neurips-multiagent/}
}