Macro-Action-Based Deep Multi-Agent Reinforcement Learning
Abstract
In real-world multi-robot systems, performing high-quality, collaborative behaviors requires robots to reason asynchronously about high-level action selection over varying time durations. Macro-Action Decentralized Partially Observable Markov Decision Processes (MacDec-POMDPs) provide a general framework for asynchronous decision making under uncertainty in fully cooperative multi-agent tasks. However, multi-agent deep reinforcement learning methods have only been developed for (synchronous) primitive-action problems. This paper proposes two Deep Q-Network (DQN) based methods for learning decentralized and centralized macro-action-value functions, with novel macro-action trajectory replay buffers introduced for each case. Evaluations on benchmark problems and a larger domain demonstrate the advantage of learning with macro-actions over primitive actions and the scalability of our approaches.
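As a rough illustration of the core idea (a sketch, not the authors' implementation), a decentralized macro-action trajectory buffer can accumulate the rewards earned while a macro-action executes and store a single macro-level transition when it terminates, so the Q-network is trained on macro-action-level experience. All names below (MacroActionReplayBuffer, begin_macro, step) are hypothetical:

import random
from collections import deque

class MacroActionReplayBuffer:
    """Stores one transition per completed macro-action (illustrative sketch)."""

    def __init__(self, capacity=100_000, gamma=0.99):
        self.buffer = deque(maxlen=capacity)
        self.gamma = gamma
        self._obs = None      # observation when the macro-action began
        self._macro = None    # index of the running macro-action
        self._reward = 0.0    # reward accumulated during execution
        self._steps = 0       # primitive steps elapsed so far

    def begin_macro(self, obs, macro_action):
        self._obs, self._macro = obs, macro_action
        self._reward, self._steps = 0.0, 0

    def step(self, reward, next_obs, macro_done, env_done):
        # Accumulate the per-step reward, discounted within the macro-action.
        self._reward += (self.gamma ** self._steps) * reward
        self._steps += 1
        if macro_done or env_done:
            # Store one macro-level transition; the step count lets the
            # learner discount the bootstrap term by gamma ** steps.
            self.buffer.append(
                (self._obs, self._macro, self._reward,
                 self._steps, next_obs, env_done))

    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)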
Cite
Text
Xiao et al. "Macro-Action-Based Deep Multi-Agent Reinforcement Learning." Conference on Robot Learning, 2019.
Markdown
[Xiao et al. "Macro-Action-Based Deep Multi-Agent Reinforcement Learning." Conference on Robot Learning, 2019.](https://mlanthology.org/corl/2019/xiao2019corl-macroactionbased/)
BibTeX
@inproceedings{xiao2019corl-macroactionbased,
  title     = {{Macro-Action-Based Deep Multi-Agent Reinforcement Learning}},
  author    = {Xiao, Yuchen and Hoffman, Joshua and Amato, Christopher},
  booktitle = {Conference on Robot Learning},
  year      = {2019},
  pages     = {1146--1161},
  volume    = {100},
  url       = {https://mlanthology.org/corl/2019/xiao2019corl-macroactionbased/}
}