Finite-Memory Near-Optimal Learning for Markov Decision Processes with Long-Run Average Reward

Abstract

We consider learning policies online in Markov decision processes with the long-run average reward (a.k.a. mean payoff). To ensure implementability of the policies, we focus on policies with finite memory. Firstly, we show that near optimality can be achieved almost surely, using an unintuitive gadget we call forgetfulness. Secondly, we extend the approach to a setting with partial knowledge of the system topology, introducing two optimality measures and providing near-optimal algorithms for these cases as well.
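As background for the objective named in the abstract, the long-run average reward (mean payoff) of a run s_1 a_1 s_2 a_2 ... is commonly defined as the limit-inferior average of the collected rewards; the notation below (reward function r, states s_t, actions a_t) is standard usage assumed here, not quoted from the paper:

\[
\mathrm{MP} \;=\; \liminf_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T} r(s_t, a_t).
\]

"Near optimality almost surely" then means that, on almost every run induced by the learned finite-memory policy, this quantity falls within a chosen ε of the optimal value.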

Cite

Text

Kretinsky et al. "Finite-Memory Near-Optimal Learning for Markov Decision Processes with Long-Run Average Reward." Uncertainty in Artificial Intelligence, 2020.

Markdown

[Kretinsky et al. "Finite-Memory Near-Optimal Learning for Markov Decision Processes with Long-Run Average Reward." Uncertainty in Artificial Intelligence, 2020.](https://mlanthology.org/uai/2020/kretinsky2020uai-finitememory/)

BibTeX

@inproceedings{kretinsky2020uai-finitememory,
  title     = {{Finite-Memory Near-Optimal Learning for Markov Decision Processes with Long-Run Average Reward}},
  author    = {Kretinsky, Jan and Michel, Fabian and Michel, Lukas and Perez, Guillermo},
  booktitle = {Uncertainty in Artificial Intelligence},
  year      = {2020},
  pages     = {1149--1158},
  volume    = {124},
  url       = {https://mlanthology.org/uai/2020/kretinsky2020uai-finitememory/}
}