Risk-Averse Bayes-Adaptive Reinforcement Learning
Abstract
In this work, we address risk-averse Bayes-adaptive reinforcement learning. We pose the problem of optimising the conditional value at risk (CVaR) of the total return in Bayes-adaptive Markov decision processes (MDPs). We show that a policy optimising CVaR in this setting is risk-averse to both the epistemic uncertainty due to the prior distribution over MDPs, and the aleatoric uncertainty due to the inherent stochasticity of MDPs. We reformulate the problem as a two-player stochastic game and propose an approximate algorithm based on Monte Carlo tree search and Bayesian optimisation. Our experiments demonstrate that our approach significantly outperforms baseline approaches for this problem.
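For reference, the standard definition of CVaR as used in the risk-averse RL literature is sketched below; the notation is assumed here and may differ from the paper's own conventions. For a return random variable Z and risk level α, CVaR is the expected return over the worst α-fraction of outcomes (the conditional-expectation form is exact when the return distribution is continuous):

% Value at risk: the alpha-quantile of the return distribution
\[
\mathrm{VaR}_\alpha(Z) = \inf \{ z \in \mathbb{R} : \Pr(Z \le z) \ge \alpha \}
\]
% Conditional value at risk: expected return given it falls at or below VaR_alpha
\[
\mathrm{CVaR}_\alpha(Z) = \mathbb{E}\left[ Z \mid Z \le \mathrm{VaR}_\alpha(Z) \right]
\]

A policy maximising CVaR_α is therefore risk-averse: it trades off mean return for better performance in the worst-case tail, which in the Bayes-adaptive setting covers both unlucky MDP draws from the prior (epistemic) and unlucky transitions within an MDP (aleatoric).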
Cite
Text
Rigter et al. "Risk-Averse Bayes-Adaptive Reinforcement Learning." Neural Information Processing Systems, 2021.
Markdown
[Rigter et al. "Risk-Averse Bayes-Adaptive Reinforcement Learning." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/rigter2021neurips-riskaverse/)
BibTeX
@inproceedings{rigter2021neurips-riskaverse,
title = {{Risk-Averse Bayes-Adaptive Reinforcement Learning}},
author = {Rigter, Marc and Lacerda, Bruno and Hawes, Nick},
booktitle = {Neural Information Processing Systems},
year = {2021},
url = {https://mlanthology.org/neurips/2021/rigter2021neurips-riskaverse/}
}