Online Regret Bounds for Markov Decision Processes with Deterministic Transitions

Abstract

We consider an upper confidence bound algorithm for Markov decision processes (MDPs) with deterministic transitions. For this algorithm we derive upper bounds on the online regret (with respect to an (ε-)optimal policy) that are logarithmic in the number of steps taken. These bounds also match known asymptotic bounds for the general MDP setting. We also present corresponding lower bounds. As an application, multi-armed bandits with switching cost are considered.
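For readers unfamiliar with the upper-confidence-bound principle the paper builds on, the following is a minimal sketch of the classical UCB1 rule for stochastic multi-armed bandits (Auer et al., 2002): play the arm maximizing empirical mean plus an exploration bonus of sqrt(2 ln t / n_i). This is the standard bandit algorithm, not the paper's MDP algorithm; the `pull` reward function and parameter values below are illustrative assumptions.

```python
import math
import random

def ucb1(pull, n_arms, horizon):
    """Run UCB1: at each step play the arm with the largest
    upper confidence index mean_i + sqrt(2 ln t / n_i)."""
    counts = [0] * n_arms       # number of pulls per arm
    sums = [0.0] * n_arms       # cumulative reward per arm
    # Initialize by pulling each arm once.
    for i in range(n_arms):
        sums[i] += pull(i)
        counts[i] = 1
    for t in range(n_arms, horizon):
        index = [sums[i] / counts[i] + math.sqrt(2 * math.log(t) / counts[i])
                 for i in range(n_arms)]
        i = max(range(n_arms), key=lambda j: index[j])
        sums[i] += pull(i)
        counts[i] += 1
    return counts

# Illustrative Bernoulli bandit with three arms.
random.seed(0)
means = [0.2, 0.5, 0.8]
counts = ucb1(lambda i: 1.0 if random.random() < means[i] else 0.0,
              n_arms=3, horizon=2000)
```

After 2000 steps the best arm (index 2) accumulates the vast majority of pulls while the suboptimal arms are sampled only logarithmically often, which is the behavior behind logarithmic regret bounds of this kind.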

Cite

Text

Ortner. "Online Regret Bounds for Markov Decision Processes with Deterministic Transitions." International Conference on Algorithmic Learning Theory, 2008. doi:10.1007/978-3-540-87987-9_14

Markdown

[Ortner. "Online Regret Bounds for Markov Decision Processes with Deterministic Transitions." International Conference on Algorithmic Learning Theory, 2008.](https://mlanthology.org/alt/2008/ortner2008alt-online/) doi:10.1007/978-3-540-87987-9_14

BibTeX

@inproceedings{ortner2008alt-online,
  title     = {{Online Regret Bounds for Markov Decision Processes with Deterministic Transitions}},
  author    = {Ortner, Ronald},
  booktitle = {International Conference on Algorithmic Learning Theory},
  year      = {2008},
  pages     = {123--137},
  doi       = {10.1007/978-3-540-87987-9_14},
  url       = {https://mlanthology.org/alt/2008/ortner2008alt-online/}
}