Near-Optimal Regret Bounds for Reinforcement Learning

Abstract

For undiscounted reinforcement learning in Markov decision processes (MDPs) we consider the total regret of a learning algorithm with respect to an optimal policy. In order to describe the transition structure of an MDP we propose a new parameter: An MDP has diameter D if for any pair of states s1, s2 there is a policy which moves from s1 to s2 in at most D steps (on average). We present a reinforcement learning algorithm with total regret Õ(DS√(AT)) after T steps for any unknown MDP with S states, A actions per state, and diameter D. This bound holds with high probability. We also present a corresponding lower bound of Ω(√(DSAT)) on the total regret of any learning algorithm. Both bounds demonstrate the utility of the diameter as a structural parameter of the MDP.
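To illustrate the diameter definition from the abstract, here is a minimal Python sketch (not from the paper's code) that computes D for a toy MDP: for each target state it runs value iteration on the shortest-path Bellman equation for expected hitting times, then takes the maximum over state pairs. The `diameter` function and the 2-state transition kernel are hypothetical, chosen only for illustration.

```python
# Minimal sketch (assumption: not the authors' implementation) of the diameter
# D = max over state pairs (s1, s2) of the minimal expected number of steps
# needed to move from s1 to s2 under the best stationary policy.
import numpy as np

def diameter(P, iters=10_000, tol=1e-10):
    """P[a, s, s'] = transition probability; returns an estimate of D."""
    A, S, _ = P.shape
    D = 0.0
    for target in range(S):
        # Expected hitting times to `target` under the best policy, via value
        # iteration on h(s) = 1 + min_a sum_{s'} P(s'|s,a) h(s'), h(target) = 0.
        h = np.zeros(S)
        for _ in range(iters):
            h_new = 1.0 + (P @ h).min(axis=0)  # (P @ h)[a, s] = sum_{s'} P[a, s, s'] h[s']
            h_new[target] = 0.0
            if np.max(np.abs(h_new - h)) < tol:
                h = h_new
                break
            h = h_new
        D = max(D, h.max())
    return D

# Toy 2-state, 2-action MDP (hypothetical numbers, for illustration only).
P = np.array([
    [[0.9, 0.1], [0.1, 0.9]],  # action 0: mostly stay in the current state
    [[0.5, 0.5], [0.5, 0.5]],  # action 1: move to the other state w.p. 1/2
])
print(diameter(P))  # ~2.0: each state reaches the other in 2 expected steps via action 1
```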

Cite

Text

Auer et al. "Near-Optimal Regret Bounds for Reinforcement Learning." Neural Information Processing Systems, 2008.

Markdown

[Auer et al. "Near-Optimal Regret Bounds for Reinforcement Learning." Neural Information Processing Systems, 2008.](https://mlanthology.org/neurips/2008/auer2008neurips-nearoptimal/)

BibTeX

@inproceedings{auer2008neurips-nearoptimal,
  title     = {{Near-Optimal Regret Bounds for Reinforcement Learning}},
  author    = {Auer, Peter and Jaksch, Thomas and Ortner, Ronald},
  booktitle = {Neural Information Processing Systems},
  year      = {2008},
  pages     = {89--96},
  url       = {https://mlanthology.org/neurips/2008/auer2008neurips-nearoptimal/}
}