Gossip-Based Actor-Learner Architectures for Deep Reinforcement Learning
Abstract
Multi-simulator training has contributed to the recent success of Deep Reinforcement Learning (Deep RL) by stabilizing learning and allowing for higher training throughput. In this work, we propose Gossip-based Actor-Learner Architectures (GALA), in which several actor-learners (such as A2C agents) are organized in a peer-to-peer communication topology and exchange information through asynchronous gossip in order to take advantage of a large number of distributed simulators. We prove that GALA agents remain within an ε-ball of one another during training when using loosely coupled asynchronous communication. By reducing the amount of synchronization between agents, GALA is more computationally efficient and scalable than A2C, its fully synchronous counterpart. GALA also outperforms A2C, being more robust and sample efficient. We show that we can run several loosely coupled GALA agents in parallel on a single GPU and achieve significantly higher hardware utilization and frame rates than vanilla A2C at comparable power draws.
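The core mechanism the abstract describes, agents that periodically mix parameters with peers in a sparse topology instead of synchronizing globally, can be illustrated with a minimal sketch. This is a hypothetical simulation, not the authors' code: it models each agent's parameters as a plain vector and runs pairwise gossip averaging on a directed ring, showing how the agents' parameters contract toward one another (the intuition behind the ε-ball result) without any global barrier.

```python
# Hypothetical sketch of gossip-based parameter mixing (not the paper's
# implementation). Each "agent" holds a parameter vector; in each round
# it averages its parameters with its neighbor on a directed ring.

def gossip_step(params, mixing=0.5):
    """One round of pairwise gossip on a directed ring.

    params: list of per-agent parameter vectors (lists of floats).
    Agent i mixes its parameters with those of neighbor (i + 1) % n.
    The mixing matrix is doubly stochastic, so the average across
    agents is preserved while their disagreement shrinks each round.
    """
    n = len(params)
    new = []
    for i in range(n):
        peer = params[(i + 1) % n]
        new.append([mixing * a + (1 - mixing) * b
                    for a, b in zip(params[i], peer)])
    return new

if __name__ == "__main__":
    # Four agents starting from different (scalar) parameters.
    agents = [[0.0], [1.0], [2.0], [3.0]]
    for _ in range(20):
        agents = gossip_step(agents)
    vals = [p[0] for p in agents]
    # Disagreement contracts geometrically toward the initial mean (1.5).
    print("spread after gossip:", max(vals) - min(vals))
```

In the actual asynchronous setting, each agent also performs local A2C gradient updates between gossip exchanges and does not wait on its peers; the ring-and-synchronous-rounds setup here is only to make the contraction easy to see.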
Cite
Text
Assran et al. "Gossip-Based Actor-Learner Architectures for Deep Reinforcement Learning." Neural Information Processing Systems, 2019.

Markdown

[Assran et al. "Gossip-Based Actor-Learner Architectures for Deep Reinforcement Learning." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/assran2019neurips-gossipbased/)

BibTeX
@inproceedings{assran2019neurips-gossipbased,
title = {{Gossip-Based Actor-Learner Architectures for Deep Reinforcement Learning}},
author = {Assran, Mahmoud and Romoff, Joshua and Ballas, Nicolas and Pineau, Joelle and Rabbat, Michael},
booktitle = {Neural Information Processing Systems},
year = {2019},
pages = {13320-13330},
url = {https://mlanthology.org/neurips/2019/assran2019neurips-gossipbased/}
}