Lipschitz Lifelong Reinforcement Learning
Abstract
We consider the problem of knowledge transfer when an agent is facing a series of Reinforcement Learning (RL) tasks. We introduce a novel metric between Markov Decision Processes (MDPs) and establish that close MDPs have close optimal value functions. Formally, the optimal value functions are Lipschitz continuous with respect to the task space. These theoretical results lead us to a value-transfer method for Lifelong RL, which we use to build a PAC-MDP algorithm with an improved convergence rate. Further, we show that, with high probability, the method incurs no negative transfer. We illustrate the benefits of the method in Lifelong RL experiments.
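As a rough illustration of the Lipschitz-continuity claim (a sketch, not the paper's exact theorem), the statement can be written with a hypothetical pseudometric $d$ over the space of MDPs and a Lipschitz constant $L_V$:

\[
\forall s, \qquad \left| V^*_{M}(s) - V^*_{M'}(s) \right| \;\le\; L_V \, d(M, M'),
\]

where $M$ and $M'$ are two tasks (MDPs) and $V^*_{M}$ denotes the optimal value function of $M$. The paper establishes a bound of this form with an explicit metric between MDPs; the symbols $d$ and $L_V$ above are placeholders for that construction.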
Cite
Text
Lecarpentier et al. "Lipschitz Lifelong Reinforcement Learning." AAAI Conference on Artificial Intelligence, 2021. doi:10.1609/AAAI.V35I9.17006
Markdown
[Lecarpentier et al. "Lipschitz Lifelong Reinforcement Learning." AAAI Conference on Artificial Intelligence, 2021.](https://mlanthology.org/aaai/2021/lecarpentier2021aaai-lipschitz/) doi:10.1609/AAAI.V35I9.17006
BibTeX
@inproceedings{lecarpentier2021aaai-lipschitz,
title = {{Lipschitz Lifelong Reinforcement Learning}},
author = {Lecarpentier, Erwan and Abel, David and Asadi, Kavosh and Jinnai, Yuu and Rachelson, Emmanuel and Littman, Michael L.},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2021},
pages = {8270-8278},
doi = {10.1609/AAAI.V35I9.17006},
url = {https://mlanthology.org/aaai/2021/lecarpentier2021aaai-lipschitz/}
}