Cooperative Exploration for Multi-Agent Deep Reinforcement Learning
Abstract
Exploration is critical for good results in deep reinforcement learning and has attracted much attention. However, existing multi-agent deep reinforcement learning algorithms still rely mostly on noise-based techniques. Very recently, exploration methods that consider cooperation among multiple agents have been developed. However, existing methods suffer from a common challenge: agents struggle to identify states that are worth exploring, and rarely coordinate their exploration efforts toward those states. To address this shortcoming, in this paper, we propose cooperative multi-agent exploration (CMAE): agents share a common goal while exploring. The goal is selected from multiple projected state spaces via a normalized entropy-based technique. Then, agents are trained to reach the goal in a coordinated manner. We demonstrate that CMAE consistently outperforms baselines on various tasks, including a sparse-reward version of the multiple-particle environment (MPE) and the StarCraft multi-agent challenge (SMAC).
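The abstract's normalized entropy-based selection can be illustrated with a minimal sketch. The idea is that a projected state space whose visitation-count histogram has low normalized entropy (entropy divided by its maximum possible value) is unevenly explored, so it is a candidate for directed exploration. The function names and the dictionary-of-counts interface below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def normalized_entropy(counts):
    """Entropy of a visitation-count histogram, normalized to [0, 1]
    by dividing by log(K), where K is the number of observed bins."""
    counts = np.asarray(counts, dtype=float)
    p = counts / counts.sum()
    p = p[p > 0]                      # ignore empty bins
    if len(p) <= 1:
        return 0.0                    # a single bin carries no entropy
    return float(-(p * np.log(p)).sum() / np.log(len(p)))

def select_goal_space(space_counts):
    """Pick the projected space whose visitation histogram is least
    uniform (lowest normalized entropy) -- a hypothetical selection
    rule sketching the idea, not the paper's exact procedure."""
    return min(space_counts, key=lambda name: normalized_entropy(space_counts[name]))
```

For example, a space visited uniformly has normalized entropy 1.0, while a space where almost all visits fall into one bin has normalized entropy near 0 and would be selected for exploration.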
Cite
Text
Liu et al. "Cooperative Exploration for Multi-Agent Deep Reinforcement Learning." International Conference on Machine Learning, 2021.
Markdown
[Liu et al. "Cooperative Exploration for Multi-Agent Deep Reinforcement Learning." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/liu2021icml-cooperative/)
BibTeX
@inproceedings{liu2021icml-cooperative,
title = {{Cooperative Exploration for Multi-Agent Deep Reinforcement Learning}},
author = {Liu, Iou-Jen and Jain, Unnat and Yeh, Raymond A and Schwing, Alexander},
booktitle = {International Conference on Machine Learning},
year = {2021},
pages = {6826--6836},
volume = {139},
url = {https://mlanthology.org/icml/2021/liu2021icml-cooperative/}
}