Distributed Multitask Reinforcement Learning with Quadratic Convergence
Abstract
Multitask reinforcement learning (MTRL) suffers from scalability issues when the number of tasks or trajectories grows large. The main reason behind this drawback is the reliance on centralised solutions. Recent methods exploited the connection between MTRL and general consensus to propose scalable solutions. These methods, however, suffer from two drawbacks: first, they rely on predefined objectives, and second, they offer only linear convergence guarantees. In this paper, we improve over the state of the art by deriving multitask reinforcement learning from a variational inference perspective. We then propose a novel distributed solver for MTRL with quadratic convergence guarantees.
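To make the consensus connection mentioned in the abstract concrete, here is a rough sketch of the general consensus formulation of multitask learning; the notation (local objectives f_i, local parameters \theta_i, shared variable z) is illustrative and not taken from the paper itself:

\[
\min_{\theta_1, \dots, \theta_n,\, z} \; \sum_{i=1}^{n} f_i(\theta_i)
\quad \text{subject to} \quad \theta_i = z, \qquad i = 1, \dots, n,
\]

where each f_i is the local (e.g., negated expected return or variational) objective of task i, \theta_i are that task's policy parameters, and z is the shared consensus variable. Quadratic convergence guarantees are typically obtained by solving such a problem with a distributed second-order (Newton-type) method rather than a first-order one.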
Cite
Text
Tutunov et al. "Distributed Multitask Reinforcement Learning with Quadratic Convergence." Neural Information Processing Systems, 2018.
Markdown
[Tutunov et al. "Distributed Multitask Reinforcement Learning with Quadratic Convergence." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/tutunov2018neurips-distributed/)
BibTeX
@inproceedings{tutunov2018neurips-distributed,
title = {{Distributed Multitask Reinforcement Learning with Quadratic Convergence}},
author = {Tutunov, Rasul and Kim, Dongho and Ammar, Haitham Bou},
booktitle = {Neural Information Processing Systems},
year = {2018},
pages = {8907--8916},
url = {https://mlanthology.org/neurips/2018/tutunov2018neurips-distributed/}
}