A Decentralized Policy Gradient Approach to Multi-Task Reinforcement Learning

Abstract

We develop a mathematical framework for solving multi-task reinforcement learning (MTRL) problems based on a type of policy gradient method. The goal in MTRL is to learn a common policy that operates effectively in different environments; these environments have similar (or overlapping) state spaces, but have different rewards and dynamics. We highlight two fundamental challenges in MTRL that are not present in its single-task counterpart, and illustrate them with simple examples. We then develop a decentralized entropy-regularized policy gradient method for solving the MTRL problem, and study its finite-time convergence rate. We demonstrate the effectiveness of the proposed method using a series of numerical experiments. These experiments range from small-scale "GridWorld" problems that readily demonstrate the trade-offs involved in multi-task learning to large-scale problems, where common policies are learned to navigate an airborne drone in multiple (simulated) environments.
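To make the idea concrete, below is a minimal sketch (not the paper's implementation) of a decentralized entropy-regularized policy gradient update. It assumes a simplified setting of our own choosing: each of N agents holds a single-state task given by a reward vector over a shared action set, uses a softmax policy, takes an exact entropy-regularized gradient step on its own task, and then averages its parameters with neighbors through a doubly stochastic mixing matrix on a ring. All names and parameter values are illustrative.

```python
import numpy as np

def softmax(theta):
    z = np.exp(theta - theta.max())
    return z / z.sum()

def entropy_reg_grad(theta, rewards, tau):
    """Exact gradient of E_pi[r] + tau * H(pi) for a softmax policy."""
    pi = softmax(theta)
    x = rewards - tau * np.log(pi)            # "soft" reward
    return pi * (x - pi @ x)                  # policy-gradient form

def ring_mixing_matrix(n):
    """Doubly stochastic weights: each agent averages with its two ring neighbors."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 1 / 3
        W[i, (i - 1) % n] = 1 / 3
        W[i, (i + 1) % n] = 1 / 3
    return W

n_agents, n_actions = 4, 5
rng = np.random.default_rng(0)
task_rewards = rng.uniform(0, 1, size=(n_agents, n_actions))  # one task per agent
thetas = np.zeros((n_agents, n_actions))
W, tau, step = ring_mixing_matrix(n_agents), 0.05, 0.5

for t in range(500):
    grads = np.stack([entropy_reg_grad(thetas[i], task_rewards[i], tau)
                      for i in range(n_agents)])
    thetas = W @ thetas + step * grads        # consensus step + local gradient step

common_policy = softmax(thetas.mean(axis=0))
print("common policy:", np.round(common_policy, 3))
print("average reward per task:", np.round(task_rewards @ common_policy, 3))
```

The entropy term keeps each local policy stochastic, which is what lets a single common policy trade off between tasks whose optimal actions disagree; the mixing step drives the agents' parameters toward consensus while each agent only ever sees its own task.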

Cite

Text

Zeng et al. "A Decentralized Policy Gradient Approach to Multi-Task Reinforcement Learning." Uncertainty in Artificial Intelligence, 2021.

Markdown

[Zeng et al. "A Decentralized Policy Gradient Approach to Multi-Task Reinforcement Learning." Uncertainty in Artificial Intelligence, 2021.](https://mlanthology.org/uai/2021/zeng2021uai-decentralized/)

BibTeX

@inproceedings{zeng2021uai-decentralized,
  title     = {{A Decentralized Policy Gradient Approach to Multi-Task Reinforcement Learning}},
  author    = {Zeng, Sihan and Anwar, Malik Aqeel and Doan, Thinh T. and Raychowdhury, Arijit and Romberg, Justin},
  booktitle = {Uncertainty in Artificial Intelligence},
  year      = {2021},
  pages     = {1002--1012},
  volume    = {161},
  url       = {https://mlanthology.org/uai/2021/zeng2021uai-decentralized/}
}