Regret Bounds for Reinforcement Learning via Markov Chain Concentration
Abstract
We give a simple optimistic algorithm for which it is easy to derive regret bounds of Õ(√(t_mix SAT)) after T steps in uniformly ergodic Markov decision processes with S states, A actions, and mixing time parameter t_mix. These are the first regret bounds in the general, non-episodic setting with an optimal dependence on all given parameters. They could only be improved by using an alternative mixing time parameter.
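For reference, the bound stated in the abstract can be typeset as follows (a minimal sketch; the displayed form and the notation Regret(T) are editorial, while t_mix, S, A, and T follow the abstract):

```latex
% Regret bound from the abstract: after T steps in a uniformly ergodic
% MDP with S states, A actions, and mixing time parameter t_mix, the
% algorithm's regret is, up to logarithmic factors,
\[
  \mathrm{Regret}(T) \;=\; \tilde{O}\!\left(\sqrt{t_{\mathrm{mix}}\, S\, A\, T}\right).
\]
```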
Cite
Text
Ortner. "Regret Bounds for Reinforcement Learning via Markov Chain Concentration." Journal of Artificial Intelligence Research, 2020. doi:10.1613/JAIR.1.11316Markdown
[Ortner. "Regret Bounds for Reinforcement Learning via Markov Chain Concentration." Journal of Artificial Intelligence Research, 2020.](https://mlanthology.org/jair/2020/ortner2020jair-regret/) doi:10.1613/JAIR.1.11316BibTeX
@article{ortner2020jair-regret,
title = {{Regret Bounds for Reinforcement Learning via Markov Chain Concentration}},
author = {Ortner, Ronald},
journal = {Journal of Artificial Intelligence Research},
year = {2020},
pages = {115--128},
doi = {10.1613/JAIR.1.11316},
volume = {67},
url = {https://mlanthology.org/jair/2020/ortner2020jair-regret/}
}