Deep Multi-Agent Reinforcement Learning with Discrete-Continuous Hybrid Action Spaces
Abstract
Deep Reinforcement Learning (DRL) has been applied to address a variety of cooperative multi-agent problems with either discrete action spaces or continuous action spaces. However, to the best of our knowledge, no previous work has succeeded in applying DRL to multi-agent problems with discrete-continuous hybrid (or parameterized) action spaces, which are very common in practice. Our work fills this gap by proposing two novel algorithms: Deep Multi-Agent Parameterized Q-Networks (Deep MAPQN) and Deep Multi-Agent Hierarchical Hybrid Q-Networks (Deep MAHHQN). We follow the centralized training but decentralized execution paradigm: different levels of communication between agents are used to facilitate training, while each agent executes its policy independently based on local observations during execution. Our empirical results on several challenging tasks (simulated RoboCup Soccer and the game Ghost Story) show that both Deep MAPQN and Deep MAHHQN are effective and significantly outperform the existing independent deep parameterized Q-learning method.
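To make the hybrid (parameterized) action setting concrete, the sketch below shows what a single agent's discrete-continuous Q-network head could look like: an actor proposes continuous parameters for every discrete action, and a Q-network scores each discrete action paired with its parameters. This is not the authors' implementation of Deep MAPQN or Deep MAHHQN; it is a minimal single-agent illustration assuming PyTorch, and all names (obs_dim, n_discrete, param_dim, hidden) are hypothetical placeholders.

```python
# Minimal sketch of a parameterized (hybrid discrete-continuous) Q-network head.
# NOT the paper's code: PyTorch is assumed, and all dimensions are illustrative.
import torch
import torch.nn as nn


class ParamActor(nn.Module):
    """Maps an observation to continuous parameters for every discrete action."""

    def __init__(self, obs_dim: int, n_discrete: int, param_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_discrete * param_dim), nn.Tanh(),  # bounded continuous params
        )
        self.n_discrete, self.param_dim = n_discrete, param_dim

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs).view(-1, self.n_discrete, self.param_dim)


class HybridQNet(nn.Module):
    """Scores each discrete action given the observation and its continuous parameters."""

    def __init__(self, obs_dim: int, n_discrete: int, param_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + n_discrete * param_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_discrete),  # one Q-value per discrete action
        )

    def forward(self, obs: torch.Tensor, params: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs, params.flatten(start_dim=1)], dim=-1))


if __name__ == "__main__":
    obs_dim, n_discrete, param_dim = 16, 3, 2   # e.g. kick/dash/turn, each with 2 parameters
    actor = ParamActor(obs_dim, n_discrete, param_dim)
    qnet = HybridQNet(obs_dim, n_discrete, param_dim)
    obs = torch.randn(1, obs_dim)
    params = actor(obs)                         # continuous part of the hybrid action
    q_values = qnet(obs, params)                # Q(s, k, x_k) for each discrete action k
    k = q_values.argmax(dim=-1).item()          # greedy discrete choice
    print("chosen discrete action:", k, "with params:", params[0, k].tolist())
```

In the multi-agent algorithms proposed in the paper, such per-agent hybrid heads are trained centrally (with communication or joint critics across agents) but executed independently from local observations; the sketch above only conveys the per-agent hybrid action structure.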
Cite
Text
Fu et al. "Deep Multi-Agent Reinforcement Learning with Discrete-Continuous Hybrid Action Spaces." International Joint Conference on Artificial Intelligence, 2019. doi:10.24963/IJCAI.2019/323
Markdown
[Fu et al. "Deep Multi-Agent Reinforcement Learning with Discrete-Continuous Hybrid Action Spaces." International Joint Conference on Artificial Intelligence, 2019.](https://mlanthology.org/ijcai/2019/fu2019ijcai-deep/) doi:10.24963/IJCAI.2019/323
BibTeX
@inproceedings{fu2019ijcai-deep,
title = {{Deep Multi-Agent Reinforcement Learning with Discrete-Continuous Hybrid Action Spaces}},
author = {Fu, Haotian and Tang, Hongyao and Hao, Jianye and Lei, Zihan and Chen, Yingfeng and Fan, Changjie},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2019},
pages = {2329-2335},
doi = {10.24963/IJCAI.2019/323},
url = {https://mlanthology.org/ijcai/2019/fu2019ijcai-deep/}
}