Policy Gradients with Variance Related Risk Criteria
Abstract
Managing risk in dynamic decision problems is of cardinal importance in many fields such as finance and process control. The most common approach to defining risk is through various variance related criteria, such as the Sharpe Ratio or the standard deviation adjusted reward. It is known that optimizing many of the variance related risk criteria is NP-hard. In this paper we devise a framework for local policy gradient style reinforcement learning algorithms for variance related criteria. Our starting point is a new formula for the variance of the cost-to-go in episodic tasks. Using this formula we develop policy gradient algorithms for criteria that involve both the expected cost and the variance of the cost. We prove the convergence of these algorithms to local minima and demonstrate their applicability in a portfolio planning problem.
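As a rough illustration of the kind of objective the paper targets, the sketch below estimates the gradient of a variance-penalized return J(theta) = E[R] - lambda * Var[R] with a plain likelihood-ratio (REINFORCE-style) Monte Carlo estimator on a toy two-armed episodic task. The toy task, the penalty weight lam, and the decomposition Var[R] = E[R^2] - (E[R])^2 are assumptions chosen for illustration only; this is not the paper's formula for the variance of the cost-to-go, nor the algorithms whose convergence the paper proves.

# Illustrative sketch only: Monte Carlo policy gradient for a
# variance-penalized objective J(theta) = E[R] - lam * Var[R].
# The environment, the penalty weight `lam`, and the gradient
# decomposition below are assumptions, not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def episode_return(action, rng):
    # Hypothetical two-armed episodic task: arm 0 is safe, arm 1 is risky.
    if action == 0:
        return rng.normal(1.0, 0.1)
    return rng.normal(1.2, 2.0)

def grad_log_pi(theta, action):
    # Gradient of log of a softmax policy with respect to theta.
    p = softmax(theta)
    g = -p
    g[action] += 1.0
    return g

def variance_penalized_gradient(theta, lam=0.5, n=2000, rng=rng):
    # Likelihood-ratio estimate of grad of E[R] - lam * Var[R],
    # using Var[R] = E[R^2] - (E[R])^2.
    p = softmax(theta)
    acts = rng.choice(len(theta), size=n, p=p)
    rets = np.array([episode_return(a, rng) for a in acts])
    glogs = np.array([grad_log_pi(theta, a) for a in acts])

    g_mean = (rets[:, None] * glogs).mean(axis=0)        # grad of E[R]
    g_second = (rets[:, None] ** 2 * glogs).mean(axis=0) # grad of E[R^2]
    g_var = g_second - 2.0 * rets.mean() * g_mean        # grad of Var[R]
    return g_mean - lam * g_var

theta = np.zeros(2)
for _ in range(200):
    theta += 0.05 * variance_penalized_gradient(theta)
print("policy:", softmax(theta))  # leans toward the low-variance arm when lam > 0

With lam > 0 the learned policy shifts probability toward the low-variance arm, which is the qualitative behavior a variance related risk criterion is meant to induce; with lam = 0 the update reduces to the standard expected-return policy gradient.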
Cite
Text
Di Castro et al. "Policy Gradients with Variance Related Risk Criteria." International Conference on Machine Learning, 2012.
Markdown
[Di Castro et al. "Policy Gradients with Variance Related Risk Criteria." International Conference on Machine Learning, 2012.](https://mlanthology.org/icml/2012/castro2012icml-policy/)
BibTeX
@inproceedings{castro2012icml-policy,
  title = {{Policy Gradients with Variance Related Risk Criteria}},
  author = {Di Castro, Dotan and Tamar, Aviv and Mannor, Shie},
  booktitle = {International Conference on Machine Learning},
  year = {2012},
  url = {https://mlanthology.org/icml/2012/castro2012icml-policy/}
}