Temporal Difference Methods for the Variance of the Reward to Go
Abstract
In this paper we extend temporal difference policy evaluation algorithms to performance criteria that include the variance of the cumulative reward. Such criteria are useful for risk management, and are important in domains such as finance and process control. We propose variants of both TD(0) and LSTD(λ) with linear function approximation, prove their convergence, and demonstrate their utility in a 4-dimensional continuous state space problem.
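As a rough illustration of the kind of update the abstract describes, the sketch below jointly estimates the expected reward-to-go J and its second moment M with linear features and TD(0)-style errors, recovering the variance as M - J². The function names, step sizes, discount handling, and feature map phi are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def td0_variance_sketch(trajectory, phi, n_features,
                        alpha=0.01, beta=0.005, gamma=1.0):
    """Hypothetical TD(0)-style sketch: jointly estimate the expected
    reward-to-go J and its second moment M with linear function
    approximation, then recover the variance as V = M - J**2.
    Names, step sizes, and discounting are illustrative assumptions."""
    theta_J = np.zeros(n_features)   # weights for J(s) ~ phi(s) @ theta_J
    theta_M = np.zeros(n_features)   # weights for M(s) ~ phi(s) @ theta_M

    for (s, r, s_next, done) in trajectory:
        f, f_next = phi(s), phi(s_next)
        J, J_next = f @ theta_J, (0.0 if done else f_next @ theta_J)
        M, M_next = f @ theta_M, (0.0 if done else f_next @ theta_M)

        # TD error for the mean reward-to-go
        delta_J = r + gamma * J_next - J
        # TD-style error for the second moment, using
        # M(s) = E[r^2 + 2*gamma*r*J(s') + gamma^2*M(s')]
        delta_M = r ** 2 + 2.0 * gamma * r * J_next + gamma ** 2 * M_next - M

        theta_J += alpha * delta_J * f
        theta_M += beta * delta_M * f

    # variance estimate at state s: phi(s) @ theta_M - (phi(s) @ theta_J) ** 2
    return theta_J, theta_M
```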
Cite
Text
Tamar et al. "Temporal Difference Methods for the Variance of the Reward to Go." International Conference on Machine Learning, 2013.
Markdown
[Tamar et al. "Temporal Difference Methods for the Variance of the Reward to Go." International Conference on Machine Learning, 2013.](https://mlanthology.org/icml/2013/tamar2013icml-temporal/)
BibTeX
@inproceedings{tamar2013icml-temporal,
  title = {{Temporal Difference Methods for the Variance of the Reward to Go}},
  author = {Tamar, Aviv and Di Castro, Dotan and Mannor, Shie},
  booktitle = {International Conference on Machine Learning},
  year = {2013},
  pages = {495--503},
  volume = {28},
  url = {https://mlanthology.org/icml/2013/tamar2013icml-temporal/}
}