SEAPoT-RL: Selective Exploration Algorithm for Policy Transfer in RL

Abstract

We propose a new method for transferring a policy from a source task to a target task in model-based reinforcement learning. Our work is motivated by scenarios where a robotic agent operates in similar but challenging environments, such as hospital wards that differ in structural arrangement or in obstacles like furniture. We address problems that require fast responses in new scenarios, adapted from the agent's incomplete prior knowledge. We present an efficient selective exploration strategy that maximally reuses the source task policy. Reuse efficiency is achieved by identifying the sub-spaces that differ in the target environment, thereby limiting the exploration needed in the target task. We empirically show that SEAPoT performs better in terms of jump starts and cumulative average rewards than existing state-of-the-art policy reuse methods.
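The paper itself provides no code; the following is a minimal sketch of the selective-exploration idea as the abstract describes it, assuming a tabular model-based setting. All names here (`changed_states`, `source_model`, `source_policy`, the L1 threshold) are illustrative assumptions, not the authors' implementation: states whose observed target dynamics disagree with the source model are flagged for exploration, and the source policy is reused everywhere else.

```python
# Illustrative sketch only: flag differing sub-spaces, explore there,
# reuse the source policy elsewhere. Not the authors' implementation.
import random

def changed_states(source_model, target_counts, threshold=0.3):
    """Flag states whose empirical target dynamics disagree with the
    source model's transition distribution by more than `threshold`
    in L1 distance."""
    flagged = set()
    for (s, a), next_counts in target_counts.items():
        total = sum(next_counts.values())
        if total == 0:
            continue
        # Empirical next-state distribution observed in the target task.
        emp = {s2: c / total for s2, c in next_counts.items()}
        src = source_model.get((s, a), {})
        support = set(emp) | set(src)
        l1 = sum(abs(emp.get(s2, 0.0) - src.get(s2, 0.0)) for s2 in support)
        if l1 > threshold:
            flagged.add(s)
    return flagged

def act(state, source_policy, flagged, actions):
    """Reuse the source policy outside the flagged sub-space; explore
    (here: uniformly at random) inside it."""
    if state in flagged:
        return random.choice(actions)  # selective exploration
    return source_policy[state]        # direct policy reuse
```

Because exploration is confined to the flagged sub-space, the agent keeps the source policy's behavior in unchanged regions from the first episode, which is consistent with the jump-start improvements the abstract reports.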

Cite

Text

Narayan et al. "SEAPoT-RL: Selective Exploration Algorithm for Policy Transfer in RL." AAAI Conference on Artificial Intelligence, 2017. doi:10.1609/AAAI.V31I1.11104

Markdown

[Narayan et al. "SEAPoT-RL: Selective Exploration Algorithm for Policy Transfer in RL." AAAI Conference on Artificial Intelligence, 2017.](https://mlanthology.org/aaai/2017/narayan2017aaai-seapot/) doi:10.1609/AAAI.V31I1.11104

BibTeX

@inproceedings{narayan2017aaai-seapot,
  title     = {{SEAPoT-RL: Selective Exploration Algorithm for Policy Transfer in RL}},
  author    = {Narayan, Akshay and Li, Zhuoru and Leong, Tze-Yun},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2017},
  pages     = {4975--4976},
  doi       = {10.1609/AAAI.V31I1.11104},
  url       = {https://mlanthology.org/aaai/2017/narayan2017aaai-seapot/}
}