State-Based Episodic Memory for Multi-Agent Reinforcement Learning

Abstract

Multi-agent reinforcement learning (MARL) algorithms have made promising progress in recent years by leveraging the centralized training and decentralized execution (CTDE) paradigm. However, existing MARL algorithms still suffer from sample inefficiency. In this paper, we propose a simple yet effective approach, called state-based episodic memory (SEM), to improve sample efficiency in MARL. SEM adopts episodic memory (EM) to supervise the centralized training procedure of CTDE in MARL. To the best of our knowledge, SEM is the first work to introduce EM into MARL. When used for MARL, SEM has lower space and time complexity than state-and-action-based EM (SAEM), which was originally proposed for single-agent reinforcement learning. Experimental results on two synthetic environments and one real environment show that introducing episodic memory into MARL can improve sample efficiency, and that SEM can reduce storage cost and time cost compared with SAEM.
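
The abstract does not give implementation details, but a minimal sketch can illustrate the general idea of a state-keyed episodic memory: a table mapping (hashed) global states to the highest discounted return observed from each state, which can then provide an auxiliary supervision signal alongside the centralized TD target. The names `StateEpisodicMemory` and `em_supervised_target`, and the mixing weight `lambda_em`, are hypothetical and not taken from the paper.

```python
import numpy as np

class StateEpisodicMemory:
    """Illustrative state-keyed episodic memory table (not the paper's exact design).

    Keys are hashed global states; each entry stores the highest discounted
    return observed from that state so far.
    """

    def __init__(self, gamma=0.99):
        self.gamma = gamma
        self.table = {}  # state_key -> best observed discounted return

    @staticmethod
    def _key(state):
        # Hash the global state into a lookup key.
        return hash(np.asarray(state, dtype=np.float32).tobytes())

    def update_from_episode(self, states, rewards):
        """After an episode ends, write back discounted returns for visited states."""
        g = 0.0
        # Iterate backwards so each state is paired with its full discounted return.
        for s, r in zip(reversed(states), reversed(rewards)):
            g = r + self.gamma * g
            k = self._key(s)
            if g > self.table.get(k, -np.inf):
                self.table[k] = g

    def lookup(self, state, default=0.0):
        """Return the best remembered return for this state (or a default)."""
        return self.table.get(self._key(state), default)


def em_supervised_target(r, q_tot_next, em_value, gamma=0.99, lambda_em=0.1):
    """One plausible way to let the memory supervise the centralized target:
    mix the TD target with the remembered return when the memory is higher.
    lambda_em is a hypothetical weight, not a value from the paper."""
    td_target = r + gamma * q_tot_next
    return td_target + lambda_em * max(em_value - td_target, 0.0)
```

In this sketch, the memory is written at episode end via a backward pass over the trajectory, and lookups only require the global state (not the joint action), which is where the storage and lookup savings over a state-and-action-keyed table would come from; the paper's actual update and loss formulation may differ.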

Cite

Text

Ma and Li. "State-Based Episodic Memory for Multi-Agent Reinforcement Learning." Machine Learning, 2023. doi:10.1007/s10994-023-06365-2

Markdown

[Ma and Li. "State-Based Episodic Memory for Multi-Agent Reinforcement Learning." Machine Learning, 2023.](https://mlanthology.org/mlj/2023/ma2023mlj-statebased/) doi:10.1007/s10994-023-06365-2

BibTeX

@article{ma2023mlj-statebased,
  title     = {{State-Based Episodic Memory for Multi-Agent Reinforcement Learning}},
  author    = {Ma, Xiao and Li, Wu-Jun},
  journal   = {Machine Learning},
  year      = {2023},
  pages     = {5163--5190},
  doi       = {10.1007/s10994-023-06365-2},
  volume    = {112},
  url       = {https://mlanthology.org/mlj/2023/ma2023mlj-statebased/}
}