An Optimal Online Method of Selecting Source Policies for Reinforcement Learning

Abstract

Transfer learning significantly accelerates reinforcement learning by exploiting relevant knowledge from previous experiences. Optimally selecting source policies during the learning process is important yet challenging, and it has received little theoretical analysis. In this paper, we develop an optimal online method for selecting source policies for reinforcement learning. The method formulates online source policy selection as a multi-armed bandit problem and augments Q-learning with policy reuse. We provide theoretical guarantees for the optimality of the selection process and for convergence to the optimal policy. In addition, experiments on a grid-based robot navigation domain demonstrate the method's efficiency and robustness in comparison with a state-of-the-art transfer learning method.
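To make the abstract's high-level description concrete, below is a minimal sketch of bandit-driven policy reuse, not the authors' exact algorithm: a UCB1 selector treats each source policy (plus pure target-task exploration) as an arm, the episodic return serves as the bandit reward, and Q-learning on the target task runs throughout. The `env` interface, the hyperparameters (`alpha`, `gamma`, `reuse_prob`, `epsilon`), and the choice of UCB1 as the selection rule are all illustrative assumptions; the paper's actual selection criterion and guarantees are not reproduced here.

```python
import math
import random


class UCB1Selector:
    """UCB1 bandit over candidate policies (illustrative sketch only).

    Each arm is a policy; an arm's reward is the episodic return
    obtained when that policy guides exploration.
    """

    def __init__(self, n_arms):
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms  # running mean of episodic returns

    def select(self):
        # Play each arm once before applying the UCB rule.
        for arm, count in enumerate(self.counts):
            if count == 0:
                return arm
        total = sum(self.counts)
        return max(
            range(len(self.counts)),
            key=lambda a: self.values[a]
            + math.sqrt(2.0 * math.log(total) / self.counts[a]),
        )

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]


def run_episode(env, q_table, source_policy, alpha=0.5, gamma=0.95,
                reuse_prob=0.9, epsilon=0.1):
    """One Q-learning episode with optional policy reuse.

    With probability `reuse_prob` the action comes from the selected
    source policy; otherwise it is epsilon-greedy on the target Q-table.
    Assumes `env` exposes reset(), step(a) -> (next_state, reward, done),
    n_states, and n_actions. Returns the episodic return (bandit reward).
    """
    state = env.reset()
    episodic_return, done = 0.0, False
    while not done:
        if source_policy is not None and random.random() < reuse_prob:
            action = source_policy(state)
        elif random.random() < epsilon:
            action = random.randrange(env.n_actions)
        else:
            action = max(range(env.n_actions), key=lambda a: q_table[state][a])
        next_state, reward, done = env.step(action)
        # Standard Q-learning update on the target task.
        best_next = max(q_table[next_state])
        q_table[state][action] += alpha * (
            reward + gamma * best_next - q_table[state][action])
        state, episodic_return = next_state, episodic_return + reward
    return episodic_return


def transfer_q_learning(env, source_policies, n_episodes=500):
    # Arm 0 reuses no source policy (pure Q-learning on the target task),
    # so the bandit can fall back to direct learning if no source helps.
    arms = [None] + list(source_policies)
    selector = UCB1Selector(len(arms))
    q_table = [[0.0] * env.n_actions for _ in range(env.n_states)]
    for _ in range(n_episodes):
        arm = selector.select()
        episodic_return = run_episode(env, q_table, arms[arm])
        selector.update(arm, episodic_return)
    return q_table
```

Including the no-reuse arm is the key design point: the bandit can discover online that none of the source policies transfer well and concentrate on plain Q-learning, which is what gives this family of methods its robustness to irrelevant source tasks.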

Cite

Text

Li and Zhang. "An Optimal Online Method of Selecting Source Policies for Reinforcement Learning." AAAI Conference on Artificial Intelligence, 2018. doi:10.1609/AAAI.V32I1.11718

Markdown

[Li and Zhang. "An Optimal Online Method of Selecting Source Policies for Reinforcement Learning." AAAI Conference on Artificial Intelligence, 2018.](https://mlanthology.org/aaai/2018/li2018aaai-optimal/) doi:10.1609/AAAI.V32I1.11718

BibTeX

@inproceedings{li2018aaai-optimal,
  title     = {{An Optimal Online Method of Selecting Source Policies for Reinforcement Learning}},
  author    = {Li, Siyuan and Zhang, Chongjie},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2018},
  pages     = {3562--3570},
  doi       = {10.1609/AAAI.V32I1.11718},
  url       = {https://mlanthology.org/aaai/2018/li2018aaai-optimal/}
}