Mean-Variance Optimization in Markov Decision Processes

Abstract

We consider finite-horizon Markov decision processes under performance measures that involve both the mean and the variance of the cumulative reward. We show that either randomized or history-based policies can improve performance. We prove that the complexity of computing a policy that maximizes the mean reward under a variance constraint is NP-hard for some cases, and strongly NP-hard for others. Finally, we offer pseudopolynomial exact and approximation algorithms.
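The quantities the paper optimizes over can be illustrated on a toy instance. The sketch below (an assumption for illustration, not from the paper) evaluates a fixed deterministic policy in a tiny 2-state, horizon-2 MDP by enumerating the full distribution of the cumulative reward, from which the mean and variance follow directly:

```python
# Toy 2-state, horizon-2 MDP (hypothetical example, not from the paper).
# P[s][a][s'] = transition probability; R[s][a] = immediate reward.
P = {0: {0: {0: 0.5, 1: 0.5}}, 1: {0: {0: 0.2, 1: 0.8}}}
R = {0: {0: 1.0}, 1: {0: 3.0}}
policy = {0: 0, 1: 0}   # one fixed action per state
horizon = 2
start = 0

def reward_distribution(s, t):
    """Return {cumulative_reward: probability} from state s with t steps left."""
    if t == 0:
        return {0.0: 1.0}
    a = policy[s]
    dist = {}
    for s2, p in P[s][a].items():
        for tail, q in reward_distribution(s2, t - 1).items():
            r = R[s][a] + tail
            dist[r] = dist.get(r, 0.0) + p * q
    return dist

dist = reward_distribution(start, horizon)
mean = sum(r * p for r, p in dist.items())
var = sum((r - mean) ** 2 * p for r, p in dist.items())
# Here the cumulative reward is 2 or 4 with probability 0.5 each,
# so mean = 3.0 and variance = 1.0.
```

A mean-variance constrained problem then asks for a policy maximizing `mean` subject to `var` not exceeding a given bound; the paper shows this is NP-hard in general, which is why exact enumeration only scales to toy instances like this one.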

Cite

Text

Mannor and Tsitsiklis. "Mean-Variance Optimization in Markov Decision Processes." International Conference on Machine Learning, 2011.

Markdown

[Mannor and Tsitsiklis. "Mean-Variance Optimization in Markov Decision Processes." International Conference on Machine Learning, 2011.](https://mlanthology.org/icml/2011/mannor2011icml-mean/)

BibTeX

@inproceedings{mannor2011icml-mean,
  title     = {{Mean-Variance Optimization in Markov Decision Processes}},
  author    = {Mannor, Shie and Tsitsiklis, John N.},
  booktitle = {International Conference on Machine Learning},
  year      = {2011},
  pages     = {177--184},
  url       = {https://mlanthology.org/icml/2011/mannor2011icml-mean/}
}