SAMBA: Safe Model-Based & Active Reinforcement Learning
Abstract
In this paper, we propose SAMBA, a novel framework for safe reinforcement learning that combines aspects of probabilistic modelling, information theory, and statistics. Our method builds on PILCO and enables active exploration through novel acquisition functions for out-of-sample Gaussian process evaluation, optimised via a multi-objective problem that supports conditional-value-at-risk constraints. We evaluate our algorithm on a variety of safe dynamical-system benchmarks involving both low- and high-dimensional state representations. Our results show orders-of-magnitude reductions in the number of samples and safety violations compared to state-of-the-art methods. Lastly, we provide intuition for the effectiveness of the framework through a detailed analysis of our acquisition functions and safety constraints.
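The safety mechanism described in the abstract hinges on conditional-value-at-risk (CVaR) constraints, which bound the expected cost in the worst-case tail of the cost distribution rather than its mean. As a minimal, illustrative sketch of that quantity (not the authors' implementation; the function name, the Monte Carlo setup, and the risk level alpha are assumptions for illustration), CVaR can be estimated empirically from sampled trajectory costs:

```python
import numpy as np

def empirical_cvar(costs, alpha=0.95):
    """Empirical CVaR of a cost sample: the mean cost in the
    worst (1 - alpha) tail, E[C | C >= VaR_alpha(C)].

    Illustrative helper, not part of the SAMBA codebase.
    """
    costs = np.asarray(costs, dtype=float)
    var = np.quantile(costs, alpha)    # value-at-risk threshold
    return costs[costs >= var].mean()  # average over the tail samples

# Toy usage: hypothetical per-trajectory costs from policy rollouts.
rng = np.random.default_rng(0)
costs = rng.normal(loc=1.0, scale=0.5, size=10_000)
print(empirical_cvar(costs, alpha=0.95))  # ~ mean of the worst 5% of costs
```

A constraint of the form CVaR_alpha(C) <= d then penalises policies whose rare worst-case behaviour is unsafe, even when their average cost looks acceptable.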
Cite
Text
Cowen-Rivers et al. "SAMBA: Safe Model-Based & Active Reinforcement Learning." Machine Learning, 2022. doi:10.1007/S10994-021-06103-6Markdown
[Cowen-Rivers et al. "SAMBA: Safe Model-Based & Active Reinforcement Learning." Machine Learning, 2022.](https://mlanthology.org/mlj/2022/cowenrivers2022mlj-samba/) doi:10.1007/S10994-021-06103-6BibTeX
@article{cowenrivers2022mlj-samba,
title = {{SAMBA: Safe Model-Based \& Active Reinforcement Learning}},
author = {Cowen-Rivers, Alexander I. and Palenicek, Daniel and Moens, Vincent and Abdullah, Mohammed Amin and Sootla, Aivar and Wang, Jun and Bou-Ammar, Haitham},
journal = {Machine Learning},
year = {2022},
pages = {173--203},
doi = {10.1007/S10994-021-06103-6},
volume = {111},
url = {https://mlanthology.org/mlj/2022/cowenrivers2022mlj-samba/}
}