A Meta-MDP Approach to Exploration for Lifelong Reinforcement Learning
Abstract
In this paper we consider the problem of how a reinforcement learning agent that is tasked with solving a sequence of reinforcement learning problems (a sequence of Markov decision processes) can use knowledge acquired early in its lifetime to improve its ability to solve new problems. We argue that previous experience with similar problems can provide an agent with information about how it should explore when facing a new but related problem. We show that the search for an optimal exploration strategy can be formulated as a reinforcement learning problem itself, and demonstrate that such a strategy can leverage patterns found in the structure of related problems. We conclude with experiments that show the benefits of optimizing an exploration strategy using our proposed framework.
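To make the idea concrete, the following is a minimal, illustrative sketch (not the authors' exact algorithm) of the meta-MDP framing: an "advisor" policy is trained across a sequence of related tasks to pick exploratory actions, while each individual task is still solved by an ordinary learning agent. The environment (a small chain MDP whose rewarding end varies across tasks, with a bias toward one side that the advisor can exploit), the Q-learning base agent, the REINFORCE-style advisor update, and all hyperparameters are assumptions made purely for illustration.

```python
import numpy as np

N_STATES, N_ACTIONS = 10, 2      # chain of states; actions: 0 = left, 1 = right
rng = np.random.default_rng(0)

def sample_task():
    """A 'task' is a chain MDP whose rewarding end is random but biased to the right,
    so the task distribution has structure the advisor can learn to exploit."""
    return 1 if rng.random() < 0.8 else 0   # 1: reward at state N-1, 0: reward at state 0

def step(state, action, goal_side):
    nxt = max(state - 1, 0) if action == 0 else min(state + 1, N_STATES - 1)
    done = (nxt == 0 and goal_side == 0) or (nxt == N_STATES - 1 and goal_side == 1)
    return nxt, (1.0 if done else -0.01), done

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

# Advisor: state-conditioned softmax policy over exploratory actions.
advisor_theta = np.zeros((N_STATES, N_ACTIONS))

def run_task(goal_side, theta, episodes=30, eps=0.3, alpha=0.5, gamma=0.99):
    """Q-learning on one task; exploratory actions are drawn from the advisor."""
    Q = np.zeros((N_STATES, N_ACTIONS))
    trajectory, total_return = [], 0.0
    for _ in range(episodes):
        s, done, t = N_STATES // 2, False, 0
        while not done and t < 50:
            if rng.random() < eps:                      # explore: ask the advisor
                a = rng.choice(N_ACTIONS, p=softmax(theta[s]))
                trajectory.append((s, a))               # credit the advisor's choices
            else:                                       # exploit: greedy on Q
                a = int(np.argmax(Q[s]))
            s2, r, done = step(s, a, goal_side)
            Q[s, a] += alpha * (r + gamma * (0 if done else Q[s2].max()) - Q[s, a])
            total_return += r
            s, t = s2, t + 1
    return total_return, trajectory

def update_advisor(theta, trajectory, meta_return, lr=0.01, baseline=0.0):
    """REINFORCE-style update: the meta-reward is the return accumulated
    while learning the task, so good exploratory choices get reinforced."""
    advantage = meta_return - baseline
    for s, a in trajectory:
        probs = softmax(theta[s])
        grad = -probs
        grad[a] += 1.0                                  # d log pi(a|s) / d theta[s]
        theta[s] += lr * advantage * grad
    return theta

baseline = 0.0
for task_idx in range(200):                             # lifetime: a sequence of tasks
    goal = sample_task()
    meta_return, traj = run_task(goal, advisor_theta)
    advisor_theta = update_advisor(advisor_theta, traj, meta_return, baseline=baseline)
    baseline = 0.9 * baseline + 0.1 * meta_return       # running baseline for variance reduction
    if (task_idx + 1) % 50 == 0:
        print(f"task {task_idx + 1}: return while learning = {meta_return:.2f}")
```

In this toy setting the advisor's objective is the return the base agent collects while it is still learning a new task, so over many tasks the advisor learns to bias exploration toward the direction that is rewarding in most tasks.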
Cite
Text
Garcia and Thomas. "A Meta-MDP Approach to Exploration for Lifelong Reinforcement Learning." Neural Information Processing Systems, 2019.
Markdown
[Garcia and Thomas. "A Meta-MDP Approach to Exploration for Lifelong Reinforcement Learning." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/garcia2019neurips-metamdp/)
BibTeX
@inproceedings{garcia2019neurips-metamdp,
title = {{A Meta-MDP Approach to Exploration for Lifelong Reinforcement Learning}},
author = {Garcia, Francisco and Thomas, Philip S.},
booktitle = {Neural Information Processing Systems},
year = {2019},
pages = {5691--5700},
url = {https://mlanthology.org/neurips/2019/garcia2019neurips-metamdp/}
}