DACOM: Learning Delay-Aware Communication for Multi-Agent Reinforcement Learning

Abstract

Communication is expected to improve multi-agent collaboration and overall performance in cooperative multi-agent reinforcement learning (MARL). In practice, however, such improvements are often limited because most existing communication schemes ignore communication overheads (e.g., communication delays). In this paper, we demonstrate that ignoring communication delays has detrimental effects on collaboration, especially in delay-sensitive tasks such as autonomous driving. To mitigate this impact, we design a delay-aware multi-agent communication model (DACOM) that adapts communication to delays. Specifically, DACOM introduces a component, TimeNet, that adjusts how long an agent waits to receive messages from other agents, so that the uncertainty associated with delay can be addressed. Our experiments reveal that DACOM achieves a non-negligible performance improvement over other mechanisms by making a better trade-off between the benefits of communication and the costs of waiting for messages.
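The trade-off the abstract describes can be made concrete with a toy sketch. This is not the paper's algorithm (DACOM learns the waiting time with a neural component, TimeNet, inside an MARL training loop); all function names and the cost model below are hypothetical, chosen only to illustrate why an intermediate waiting time can beat both "wait for nothing" and "wait for everything" when messages arrive with heterogeneous delays.

```python
def gather_messages(message_delays, wait_timeout):
    """Collect only the messages whose delay fits within the waiting budget.

    message_delays: list of (sender_id, delay) pairs (delay in seconds)
    wait_timeout:   the agent's waiting budget (in DACOM, a learned quantity)
    Returns (received sender ids, time actually spent waiting).
    """
    received = [sender for sender, delay in message_delays if delay <= wait_timeout]
    in_budget = [delay for _, delay in message_delays if delay <= wait_timeout]
    # The agent acts as soon as the last in-budget message arrives.
    time_spent = max(in_budget) if in_budget else 0.0
    return received, time_spent


def waiting_score(n_received, n_total, time_spent, cost_per_second=0.5):
    """Toy objective: value of received information minus the cost of waiting."""
    return n_received / n_total - cost_per_second * time_spent
```

With delays `[("a", 0.1), ("b", 0.3), ("c", 2.0)]`, a budget of 0.5 s receives two of three messages after 0.3 s of waiting and scores higher than both a zero budget (no messages) and a 2.0 s budget (all messages, but a large waiting cost), illustrating the benefit-versus-waiting trade-off DACOM learns to balance.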

Cite

Text

Yuan et al. "DACOM: Learning Delay-Aware Communication for Multi-Agent Reinforcement Learning." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I10.26389

Markdown

[Yuan et al. "DACOM: Learning Delay-Aware Communication for Multi-Agent Reinforcement Learning." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/yuan2023aaai-dacom/) doi:10.1609/AAAI.V37I10.26389

BibTeX

@inproceedings{yuan2023aaai-dacom,
  title     = {{DACOM: Learning Delay-Aware Communication for Multi-Agent Reinforcement Learning}},
  author    = {Yuan, Tingting and Chung, Hwei-Ming and Yuan, Jie and Fu, Xiaoming},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2023},
  pages     = {11763--11771},
  doi       = {10.1609/AAAI.V37I10.26389},
  url       = {https://mlanthology.org/aaai/2023/yuan2023aaai-dacom/}
}