Bounded Parameter Markov Decision Processes with Average Reward Criterion
Abstract
Bounded parameter Markov Decision Processes (BMDPs) address the issue of dealing with uncertainty in the parameters of a Markov Decision Process (MDP). Unlike the case of an MDP, the notion of an optimal policy for a BMDP is not entirely straightforward. We consider two notions of optimality based on optimistic and pessimistic criteria. These have been analyzed for discounted BMDPs. Here we provide results for average reward BMDPs. We establish a fundamental relationship between the discounted and the average reward problems, prove the existence of Blackwell optimal policies and, for both notions of optimality, derive algorithms that converge to the optimal value function.
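The optimistic criterion mentioned in the abstract can be illustrated with interval value iteration for the discounted case, which the paper builds on. This is a minimal sketch, not the paper's algorithm: the interval bounds, rewards, discount factor, and all function names below are illustrative assumptions. The optimistic Bellman backup picks, within each transition-probability interval, the distribution most favorable to the agent by greedily shifting mass toward high-value successor states.

```python
import numpy as np

def optimistic_transition(lo, hi, values):
    """Pick transition probabilities within [lo, hi] (summing to 1)
    that maximize expected next-state value: start every successor at
    its lower bound, then pour the remaining mass into successors in
    decreasing order of value, capped by the upper bounds."""
    p = lo.copy()
    remaining = 1.0 - lo.sum()
    for s in np.argsort(-values):  # highest-value successors first
        add = min(hi[s] - lo[s], remaining)
        p[s] += add
        remaining -= add
    return p

def interval_value_iteration(lo, hi, rewards, gamma=0.9, iters=500):
    """lo, hi: (S, A, S) arrays of interval bounds on transitions.
    rewards: (S, A). Returns the optimistic discounted value function."""
    S, A = rewards.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = np.empty((S, A))
        for s in range(S):
            for a in range(A):
                p = optimistic_transition(lo[s, a], hi[s, a], V)
                Q[s, a] = rewards[s, a] + gamma * p @ V
        V = Q.max(axis=1)
    return V
```

The pessimistic criterion is the mirror image: fill mass in increasing order of value. The paper's contribution is the analogous treatment for the average reward criterion, obtained via the relationship to the discounted problem.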
Cite
Text
Tewari and Bartlett. "Bounded Parameter Markov Decision Processes with Average Reward Criterion." Annual Conference on Computational Learning Theory, 2007. doi:10.1007/978-3-540-72927-3_20
Markdown
[Tewari and Bartlett. "Bounded Parameter Markov Decision Processes with Average Reward Criterion." Annual Conference on Computational Learning Theory, 2007.](https://mlanthology.org/colt/2007/tewari2007colt-bounded/) doi:10.1007/978-3-540-72927-3_20
BibTeX
@inproceedings{tewari2007colt-bounded,
title = {{Bounded Parameter Markov Decision Processes with Average Reward Criterion}},
author = {Tewari, Ambuj and Bartlett, Peter L.},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2007},
  pages = {263--277},
doi = {10.1007/978-3-540-72927-3_20},
url = {https://mlanthology.org/colt/2007/tewari2007colt-bounded/}
}