Towards Tight Bounds on the Sample Complexity of Average-Reward MDPs

Abstract

We prove new upper and lower bounds on the sample complexity of finding an $\epsilon$-optimal policy of an infinite-horizon average-reward Markov decision process (MDP) given access to a generative model. When the mixing time of the probability transition matrix of all policies is at most $t_\mathrm{mix}$, we provide an algorithm that solves the problem using $\widetilde{O}(t_\mathrm{mix} \epsilon^{-3})$ (oblivious) samples per state-action pair. Further, we provide a lower bound showing that a linear dependence on $t_\mathrm{mix}$ is necessary in the worst case for any algorithm that uses oblivious samples. We obtain our results by establishing connections between infinite-horizon average-reward MDPs and discounted MDPs that may be of further utility.
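
To see how a bound of this form can arise from the average-to-discounted connection mentioned above, here is a rough back-of-envelope sketch; it is not the paper's exact argument, and both the choice of discount factor and the assumed discounted-MDP sample complexity of $\widetilde{O}((1-\gamma)^{-3}\epsilon^{-2})$ per state-action pair are standard facts from the generative-model literature rather than claims from this abstract. For a $t_\mathrm{mix}$-mixing MDP, the normalized discounted value $(1-\gamma)V^\pi_\gamma$ approximates the average reward $\rho^\pi$ up to $O((1-\gamma)\,t_\mathrm{mix})$, so taking $1-\gamma = \Theta(\epsilon/t_\mathrm{mix})$ and solving the discounted MDP to (unnormalized) accuracy $\Theta(\epsilon/(1-\gamma))$ suffices for an $O(\epsilon)$-optimal average-reward policy. Plugging these choices into the assumed discounted bound gives

$$
\widetilde{O}\!\left(\frac{1}{(1-\gamma)^{3}}\cdot\frac{1}{\bigl(\epsilon/(1-\gamma)\bigr)^{2}}\right)
= \widetilde{O}\!\left(\frac{1}{(1-\gamma)\,\epsilon^{2}}\right)
= \widetilde{O}\!\left(\frac{t_\mathrm{mix}}{\epsilon^{3}}\right)
\quad \text{samples per state-action pair,}
$$

which matches the $\widetilde{O}(t_\mathrm{mix}\,\epsilon^{-3})$ upper bound stated above.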

Cite

Text

Jin and Sidford. "Towards Tight Bounds on the Sample Complexity of Average-Reward MDPs." International Conference on Machine Learning, 2021.

Markdown

[Jin and Sidford. "Towards Tight Bounds on the Sample Complexity of Average-Reward MDPs." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/jin2021icml-tight/)

BibTeX

@inproceedings{jin2021icml-tight,
  title     = {{Towards Tight Bounds on the Sample Complexity of Average-Reward MDPs}},
  author    = {Jin, Yujia and Sidford, Aaron},
  booktitle = {International Conference on Machine Learning},
  year      = {2021},
  pages     = {5055--5064},
  volume    = {139},
  url       = {https://mlanthology.org/icml/2021/jin2021icml-tight/}
}