Reinforcement Learning Experience Reuse with Policy Residual Representation
Abstract
Experience reuse is key to sample-efficient reinforcement learning. One of the critical issues is how the experience is represented and stored. Previously, experience has been stored in the form of features, individual models, or an average model, each lying at a different granularity. However, new tasks may require experience across multiple granularities. In this paper, we propose the policy residual representation (PRR) network, which can extract and store multiple levels of experience. The PRR network is trained on a set of tasks with a multi-level architecture, where a module in each level corresponds to a subset of the tasks. Therefore, the PRR network represents the experience in a spectrum-like way. When training on a new task, the PRR network can provide different levels of experience to accelerate the learning. We experiment with the PRR network on a set of grid-world navigation tasks, locomotion tasks, and fighting tasks in a video game. The results show that the PRR network leads to better reuse of experience and thus outperforms some state-of-the-art approaches.
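The abstract's "multi-level architecture, where a module in each level corresponds to a subset of the tasks" can be pictured as a residual composition of per-level policy heads. The sketch below is only an illustration of that idea under our own assumptions (the class name `PRRSketch`, the module counts per level, and the simple summation of logits are hypothetical), not the authors' implementation:

```python
import torch
import torch.nn as nn

class PRRSketch(nn.Module):
    """Illustrative multi-level policy sketch (not the paper's code).

    Level 0 holds a single module shared by all tasks; deeper levels hold
    modules specialised to smaller task subsets. A task's policy logits are
    the sum of one selected module per level, so coarse-to-fine experience
    is stored in separate, reusable pieces.
    """

    def __init__(self, obs_dim, n_actions, modules_per_level=(1, 2, 4)):
        super().__init__()
        self.levels = nn.ModuleList([
            nn.ModuleList([nn.Linear(obs_dim, n_actions) for _ in range(k)])
            for k in modules_per_level
        ])

    def forward(self, obs, module_index_per_level):
        # Sum the output of the selected module at each level (residual composition).
        logits = 0.0
        for level, idx in zip(self.levels, module_index_per_level):
            logits = logits + level[idx](obs)
        return logits

# Example: a task routed to module 0 at every level.
net = PRRSketch(obs_dim=8, n_actions=4)
obs = torch.randn(1, 8)
print(net(obs, module_index_per_level=(0, 0, 0)).shape)  # torch.Size([1, 4])
```

When transferring to a new task, one could reuse shallow (shared) modules and train only the deeper, task-specific ones; this is one plausible reading of how different levels of experience accelerate learning.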
Cite
Text
Zhou et al. "Reinforcement Learning Experience Reuse with Policy Residual Representation." International Joint Conference on Artificial Intelligence, 2019. doi:10.24963/IJCAI.2019/618
Markdown
[Zhou et al. "Reinforcement Learning Experience Reuse with Policy Residual Representation." International Joint Conference on Artificial Intelligence, 2019.](https://mlanthology.org/ijcai/2019/zhou2019ijcai-reinforcement/) doi:10.24963/IJCAI.2019/618
BibTeX
@inproceedings{zhou2019ijcai-reinforcement,
title = {{Reinforcement Learning Experience Reuse with Policy Residual Representation}},
author = {Zhou, Wen-Ji and Yu, Yang and Chen, Yingfeng and Guan, Kai and Lv, Tangjie and Fan, Changjie and Zhou, Zhi-Hua},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2019},
pages = {4447--4453},
doi = {10.24963/IJCAI.2019/618},
url = {https://mlanthology.org/ijcai/2019/zhou2019ijcai-reinforcement/}
}