Reinforcement Learning of Risk-Constrained Policies in Markov Decision Processes
Abstract
Markov decision processes (MDPs) are the de facto framework for sequential decision making in the presence of stochastic uncertainty. A classical optimization criterion for MDPs is to maximize the expected discounted-sum payoff, which ignores low-probability catastrophic events with highly negative impact on the system. On the other hand, risk-averse policies require the probability of undesirable events to be below a given threshold, but they do not account for optimization of the expected payoff. We consider MDPs with discounted-sum payoff and failure states that represent catastrophic outcomes. The objective of risk-constrained planning is to maximize the expected discounted-sum payoff among risk-averse policies that ensure the probability of encountering a failure state is below a desired threshold. Our main contribution is an efficient risk-constrained planning algorithm that combines UCT-like search with a predictor learned through interaction with the MDP (in the style of AlphaZero) and with a risk-constrained action selection via linear programming. We demonstrate the effectiveness of our approach with experiments on classical MDPs from the literature, including benchmarks with on the order of 10^6 states.
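The risk-constrained action selection mentioned in the abstract can be illustrated with a small sketch (not the paper's implementation): given per-action value estimates and failure-risk estimates at a state, pick a distribution over actions maximizing expected value subject to the expected failure probability staying below the threshold. This is a linear program over the probability simplex; with a single risk constraint the optimum mixes at most two actions, so enumerating LP vertices solves it exactly. The function name and inputs below are illustrative assumptions, not the authors' API.

```python
def risk_constrained_choice(values, risks, threshold):
    """Sketch of risk-constrained action selection as a linear program:
    maximize sum_a p[a] * values[a]
    subject to sum_a p[a] * risks[a] <= threshold, sum_a p[a] = 1, p >= 0.
    With one risk constraint, an optimal basic solution mixes at most two
    actions, so enumerating the LP vertices is exact."""
    n = len(values)
    best_val, best_p = None, None
    # Vertices playing a single action, if it satisfies the risk bound.
    for a in range(n):
        if risks[a] <= threshold and (best_val is None or values[a] > best_val):
            best_val, best_p = values[a], {a: 1.0}
    # Vertices mixing two actions so the risk constraint is tight.
    for a in range(n):
        for b in range(n):
            if risks[a] == risks[b]:
                continue
            w = (threshold - risks[b]) / (risks[a] - risks[b])
            if 0.0 <= w <= 1.0:
                val = w * values[a] + (1.0 - w) * values[b]
                if best_val is None or val > best_val:
                    best_val, best_p = val, {a: w, b: 1.0 - w}
    return best_val, best_p  # (None, None) if the threshold is infeasible
```

For example, with values `[1.0, 0.5, 0.2]`, risks `[0.3, 0.1, 0.0]`, and threshold `0.15`, the sketch mixes the first two actions (weights 0.25 and 0.75) for an expected value of 0.625, rather than falling back to the risk-free but low-value third action.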
Cite
Text
Brázdil et al. "Reinforcement Learning of Risk-Constrained Policies in Markov Decision Processes." AAAI Conference on Artificial Intelligence, 2020. doi:10.1609/AAAI.V34I06.6531
Markdown
[Brázdil et al. "Reinforcement Learning of Risk-Constrained Policies in Markov Decision Processes." AAAI Conference on Artificial Intelligence, 2020.](https://mlanthology.org/aaai/2020/brazdil2020aaai-reinforcement/) doi:10.1609/AAAI.V34I06.6531
BibTeX
@inproceedings{brazdil2020aaai-reinforcement,
title = {{Reinforcement Learning of Risk-Constrained Policies in Markov Decision Processes}},
author = {Brázdil, Tomáš and Chatterjee, Krishnendu and Novotný, Petr and Vahala, Jiří},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2020},
pages = {9794-9801},
doi = {10.1609/AAAI.V34I06.6531},
url = {https://mlanthology.org/aaai/2020/brazdil2020aaai-reinforcement/}
}