Model-Free Reinforcement Learning in Infinite-Horizon Average-Reward Markov Decision Processes

Abstract

Model-free reinforcement learning is known to be memory- and computation-efficient and more amenable to large-scale problems. In this paper, two model-free algorithms are introduced for learning infinite-horizon average-reward Markov Decision Processes (MDPs). The first algorithm reduces the problem to the discounted-reward version and achieves $\mathcal{O}(T^{2/3})$ regret after $T$ steps, under the minimal assumption of weakly communicating MDPs. To our knowledge, this is the first model-free algorithm for general MDPs in this setting. The second algorithm makes use of recent advances in adaptive algorithms for adversarial multi-armed bandits and improves the regret to $\mathcal{O}(\sqrt{T})$, albeit under a stronger ergodicity assumption. This result significantly improves over the $\mathcal{O}(T^{3/4})$ regret achieved by the only existing model-free algorithm, due to Abbasi-Yadkori et al. (2019), for ergodic MDPs in the infinite-horizon average-reward setting.
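To give a rough feel for the reduction idea behind the first algorithm, the sketch below runs plain tabular Q-learning on a discounted surrogate of the average-reward problem, with a horizon-dependent discount factor. This is only an illustration, not the paper's Optimistic Q-learning (which additionally uses optimistic exploration bonuses); the environment interface, the horizon choice `H`, and the step-size schedule are illustrative assumptions.

```python
import numpy as np

def discounted_q_learning_sketch(env, T, num_states, num_actions):
    """Illustrative sketch: Q-learning on a discounted surrogate of an
    average-reward MDP. The discount factor gamma = 1 - 1/H grows toward 1
    with the horizon, mirroring the reduction idea; all tuning choices here
    are assumptions for illustration, not the paper's exact parameters."""
    H = max(2, int(round(T ** (1.0 / 3))))    # illustrative effective horizon
    gamma = 1.0 - 1.0 / H                     # discount factor of the surrogate MDP
    Q = np.zeros((num_states, num_actions))   # tabular Q-value estimates
    counts = np.zeros((num_states, num_actions))  # visit counts for step sizes

    s = env.reset()                           # assumed interface: reset() -> state index
    total_reward = 0.0
    for _ in range(T):
        a = int(np.argmax(Q[s]))              # greedy action (no optimism in this sketch)
        s_next, r = env.step(a)               # assumed interface: step(a) -> (next_state, reward)
        counts[s, a] += 1
        alpha = (H + 1) / (H + counts[s, a])  # commonly used step-size schedule
        target = r + gamma * np.max(Q[s_next])
        Q[s, a] += alpha * (target - Q[s, a]) # standard Q-learning update on the surrogate
        total_reward += r
        s = s_next
    return Q, total_reward
```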

Cite

Text

Wei et al. "Model-Free Reinforcement Learning in Infinite-Horizon Average-Reward Markov Decision Processes." International Conference on Machine Learning, 2020.

Markdown

[Wei et al. "Model-Free Reinforcement Learning in Infinite-Horizon Average-Reward Markov Decision Processes." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/wei2020icml-modelfree/)

BibTeX

@inproceedings{wei2020icml-modelfree,
  title     = {{Model-Free Reinforcement Learning in Infinite-Horizon Average-Reward Markov Decision Processes}},
  author    = {Wei, Chen-Yu and Jahromi, Mehdi Jafarnia and Luo, Haipeng and Sharma, Hiteshi and Jain, Rahul},
  booktitle = {International Conference on Machine Learning},
  year      = {2020},
  pages     = {10170--10180},
  volume    = {119},
  url       = {https://mlanthology.org/icml/2020/wei2020icml-modelfree/}
}