Coordinated Exploration in Concurrent Reinforcement Learning
Abstract
We consider a team of reinforcement learning agents that concurrently learn to operate in a common environment. We identify three properties - adaptivity, commitment, and diversity - which are necessary for efficient coordinated exploration and demonstrate that straightforward extensions of single-agent optimistic and posterior sampling approaches fail to satisfy them. As an alternative, we propose seed sampling, which extends posterior sampling in a manner that meets these requirements. Simulation results investigate how per-agent regret decreases as the number of agents grows, establishing substantial advantages of seed sampling over alternative exploration schemes.
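To make the mechanism concrete, below is a minimal sketch of one way the seed-sampling idea can be instantiated, assuming a shared Beta-Bernoulli bandit rather than a full MDP; the environment, constants, and variable names are illustrative choices, not from the paper. Each agent draws a fixed random seed (commitment), seeds are independent across agents (diversity), and each seed is pushed through the current shared posterior via the inverse CDF, so an agent's sample shifts as teammates' data accumulates (adaptivity).

```python
# Illustrative seed-sampling sketch for a shared Beta-Bernoulli bandit.
# This is a simplified stand-in for the paper's MDP setting.
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(0)
n_arms, n_agents, n_rounds = 5, 4, 200
true_means = rng.uniform(size=n_arms)      # hidden arm means (unknown to agents)
alpha = np.ones(n_arms)                    # shared Beta posterior parameters
beta_ = np.ones(n_arms)

# Each agent draws one fixed uniform seed per arm: held constant for
# commitment, independent across agents for diversity.
seeds = rng.uniform(size=(n_agents, n_arms))

for t in range(n_rounds):
    for k in range(n_agents):
        # The fixed seed is mapped through the *current* posterior via the
        # inverse CDF, so the same seed yields updated samples as shared
        # data grows (adaptivity).
        sample = beta.ppf(seeds[k], alpha, beta_)
        arm = int(np.argmax(sample))
        reward = rng.random() < true_means[arm]
        # All agents pool observations into one posterior.
        alpha[arm] += reward
        beta_[arm] += 1 - reward

print("posterior means:", alpha / (alpha + beta_))
```

Routing a fixed seed through the evolving posterior, rather than redrawing a fresh posterior sample each step, is what keeps each agent's exploration intent stable while still incorporating the team's data.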
Cite
Text
Dimakopoulou and Van Roy. "Coordinated Exploration in Concurrent Reinforcement Learning." International Conference on Machine Learning, 2018.

Markdown
[Dimakopoulou and Van Roy. "Coordinated Exploration in Concurrent Reinforcement Learning." International Conference on Machine Learning, 2018.](https://mlanthology.org/icml/2018/dimakopoulou2018icml-coordinated/)

BibTeX
@inproceedings{dimakopoulou2018icml-coordinated,
title = {{Coordinated Exploration in Concurrent Reinforcement Learning}},
author = {Dimakopoulou, Maria and Van Roy, Benjamin},
booktitle = {International Conference on Machine Learning},
year = {2018},
pages = {1271--1279},
volume = {80},
url = {https://mlanthology.org/icml/2018/dimakopoulou2018icml-coordinated/}
}