Safe Policy Search for Lifelong Reinforcement Learning with Sublinear Regret
Abstract
Lifelong reinforcement learning provides a promising framework for developing versatile agents that can accumulate knowledge over a lifetime of experience and rapidly learn new tasks by building upon prior knowledge. However, current lifelong learning methods exhibit non-vanishing regret as the amount of experience increases, and suffer from limitations that can lead to suboptimal or unsafe control policies. To address these issues, we develop a lifelong policy gradient learner that operates in an adversarial setting to learn multiple tasks online while enforcing safety constraints on the learned policies. We demonstrate, for the first time, sublinear regret for lifelong policy search, and validate our algorithm on several benchmark dynamical systems and an application to quadrotor control.
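The abstract describes policy search under safety constraints on the learned policies. As a minimal illustrative sketch of that general pattern (not the paper's algorithm), one common way to enforce a parameter-space safety constraint is projected gradient ascent: take a policy-gradient step, then project the parameters back onto the feasible set. Here the feasible set is assumed, purely for illustration, to be an L2 ball of a given radius.

```python
import numpy as np

# Hedged sketch: generic projected policy-gradient update.
# This is NOT the algorithm from the paper; it only illustrates the
# broad idea of policy search subject to a constraint on the policy
# parameters. The L2-ball constraint below is an assumed stand-in.

def project_onto_ball(theta, radius):
    """Project theta onto the set {x : ||x||_2 <= radius}."""
    norm = np.linalg.norm(theta)
    if norm > radius:
        return theta * (radius / norm)
    return theta

def constrained_policy_gradient_step(theta, grad, lr, radius):
    """One gradient-ascent step on expected return, then projection
    back onto the feasible (safe) parameter set."""
    return project_onto_ball(theta + lr * grad, radius)
```

Projection-based updates of this flavor keep every iterate feasible, which is one standard route to safety guarantees in constrained policy search.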
Cite
Text
Ammar et al. "Safe Policy Search for Lifelong Reinforcement Learning with Sublinear Regret." International Conference on Machine Learning, 2015.
Markdown
[Ammar et al. "Safe Policy Search for Lifelong Reinforcement Learning with Sublinear Regret." International Conference on Machine Learning, 2015.](https://mlanthology.org/icml/2015/ammar2015icml-safe/)
BibTeX
@inproceedings{ammar2015icml-safe,
title = {{Safe Policy Search for Lifelong Reinforcement Learning with Sublinear Regret}},
author = {Ammar, Haitham Bou and Tutunov, Rasul and Eaton, Eric},
booktitle = {International Conference on Machine Learning},
year = {2015},
pages = {2361--2369},
volume = {37},
url = {https://mlanthology.org/icml/2015/ammar2015icml-safe/}
}