A Comprehensive Survey on Safe Reinforcement Learning
Abstract
Safe Reinforcement Learning can be defined as the process of learning policies that maximize the expectation of the return in problems where it is important to ensure reasonable system performance and/or respect safety constraints during the learning and/or deployment processes. We categorize and analyze two approaches to Safe Reinforcement Learning. The first is based on modifying the optimality criterion, the classic discounted finite/infinite-horizon return, with a safety factor. The second is based on modifying the exploration process through the incorporation of external knowledge or the guidance of a risk metric. We use the proposed classification to survey the existing literature and to suggest future directions for Safe Reinforcement Learning.
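As a brief illustration of the first family (the notation below is our own and does not appear in the abstract), the classic criterion maximizes the expected discounted return, whereas a safety-modified criterion such as the constrained one maximizes the same return subject to a bound on an expected cost:

\[
\max_{\pi} \; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t} r_{t}\right]
\qquad \text{versus} \qquad
\max_{\pi} \; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t} r_{t}\right]
\;\; \text{s.t.} \;\;
\mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t} c_{t}\right] \le d,
\]

where \(r_{t}\) is the reward at step \(t\), \(\gamma \in [0,1)\) the discount factor, and \(c_{t}\) and \(d\) a per-step cost and safety threshold introduced here purely for illustration.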
Cite
Text
García and Fernández. "A Comprehensive Survey on Safe Reinforcement Learning." Journal of Machine Learning Research, 2015.
Markdown
[García and Fernández. "A Comprehensive Survey on Safe Reinforcement Learning." Journal of Machine Learning Research, 2015.](https://mlanthology.org/jmlr/2015/garcia2015jmlr-comprehensive/)
BibTeX
@article{garcia2015jmlr-comprehensive,
  title   = {{A Comprehensive Survey on Safe Reinforcement Learning}},
  author  = {García, Javier and Fernández, Fernando},
  journal = {Journal of Machine Learning Research},
  year    = {2015},
  volume  = {16},
  pages   = {1437--1480},
  url     = {https://mlanthology.org/jmlr/2015/garcia2015jmlr-comprehensive/}
}