Deep Decentralized Multi-Task Multi-Agent Reinforcement Learning Under Partial Observability
Abstract
Many real-world tasks involve multiple agents with partial observability and limited communication. Learning is challenging in these settings because each agent's local viewpoint makes the environment appear non-stationary as teammates explore concurrently. Approaches that learn specialized policies for individual tasks face two problems when applied to the real world: not only must agents learn and store a distinct policy for each task, but in practice task identities are often non-observable, making these approaches inapplicable. This paper formalizes and addresses the problem of multi-task multi-agent reinforcement learning under partial observability. We introduce a decentralized single-task learning approach that is robust to concurrent interactions of teammates, and present an approach for distilling single-task policies into a unified policy that performs well across multiple related tasks, without explicit provision of task identity.
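The abstract condenses two technical ideas. The first, decentralized learning that stays robust while teammates explore, is commonly realized with hysteretic Q-learning: two learning rates, so that negative surprises (often caused by a teammate's exploratory action rather than one's own) change value estimates more slowly than positive ones. Below is a minimal sketch of that update in plain Python; the function name, rates, and tabular setting are illustrative assumptions, not the paper's exact deep recurrent implementation.

import numpy as np

def hysteretic_q_update(Q, s, a, r, s_next, alpha=0.1, beta=0.01, gamma=0.99):
    # Two learning rates: the larger `alpha` applies when the TD error is
    # positive, the smaller `beta` when it is negative, so an agent
    # discounts bad outcomes likely caused by exploring teammates.
    td_error = r + gamma * np.max(Q[s_next]) - Q[s, a]
    rate = alpha if td_error >= 0 else beta
    Q[s, a] += rate * td_error
    return Q

# Toy usage on a 5-state, 2-action table.
Q = np.zeros((5, 2))
Q = hysteretic_q_update(Q, s=0, a=1, r=1.0, s_next=2)

The second idea, distilling several task-specific policies into one unified policy, can be sketched as pulling a student network's action distribution toward the temperature-softened outputs of per-task teachers; the KL loss and temperature below follow standard policy-distillation practice and are again assumptions, not the paper's exact training recipe.

def softmax(x, tau=0.01):
    # Temperature-softened distribution over actions; low tau sharpens
    # the teacher's preferences.
    z = x / tau
    z -= z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_q, student_q, tau=0.01):
    # KL divergence from the softened teacher policy to the student
    # policy, averaged over a batch of observations.
    p = softmax(teacher_q, tau)
    q = softmax(student_q, tau)
    return np.mean(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1))

# Toy usage: batch of 4 observations, 3 actions.
teacher = np.random.randn(4, 3)
student = np.random.randn(4, 3)
print(distillation_loss(teacher, student))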
Cite
Text
Omidshafiei et al. "Deep Decentralized Multi-Task Multi-Agent Reinforcement Learning Under Partial Observability." International Conference on Machine Learning, 2017.
Markdown
[Omidshafiei et al. "Deep Decentralized Multi-Task Multi-Agent Reinforcement Learning Under Partial Observability." International Conference on Machine Learning, 2017.](https://mlanthology.org/icml/2017/omidshafiei2017icml-deep/)
BibTeX
@inproceedings{omidshafiei2017icml-deep,
title = {{Deep Decentralized Multi-Task Multi-Agent Reinforcement Learning Under Partial Observability}},
author = {Omidshafiei, Shayegan and Pazis, Jason and Amato, Christopher and How, Jonathan P. and Vian, John},
booktitle = {International Conference on Machine Learning},
year = {2017},
pages = {2681--2690},
volume = {70},
url = {https://mlanthology.org/icml/2017/omidshafiei2017icml-deep/}
}