Variational Regret Bounds for Reinforcement Learning

Abstract

We consider undiscounted reinforcement learning in Markov decision processes (MDPs) where *both* the reward functions and the state-transition probabilities may vary (gradually or abruptly) over time. For this problem setting, we propose an algorithm and provide performance guarantees for the regret evaluated against the optimal non-stationary policy. The upper bound on the regret is given in terms of the total variation in the MDP. This is the first variational regret bound for the general reinforcement learning setting.
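For concreteness, the quantities mentioned in the abstract can be sketched in standard dynamic-regret notation. The block below is illustrative only (the symbols $\Delta(T)$, $V_r$, $V_p$, $\rho^*_t$ and the exact definitions are assumptions, not taken verbatim from the paper): the regret compares the learner's accumulated reward to the time-varying optimal average reward, and the total variation sums the per-step changes in rewards and transition kernels.

```latex
% Illustrative sketch only: standard dynamic-regret notation,
% not necessarily the exact definitions used in the paper.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
Let $M_t = (\mathcal{S}, \mathcal{A}, r_t, p_t)$ denote the MDP active at step $t$,
with optimal average reward $\rho^*_t$. After $T$ steps, the (dynamic) regret of a
learner that collects rewards $r_t(s_t, a_t)$ is
\[
  \Delta(T) \;=\; \sum_{t=1}^{T} \rho^*_t \;-\; \sum_{t=1}^{T} r_t(s_t, a_t).
\]
A natural measure of the total variation of the MDP splits into a reward part and a
transition part:
\[
  V_r \;=\; \sum_{t=1}^{T-1} \max_{s,a} \bigl| r_{t+1}(s,a) - r_t(s,a) \bigr|,
  \qquad
  V_p \;=\; \sum_{t=1}^{T-1} \max_{s,a} \bigl\| p_{t+1}(\cdot \mid s,a) - p_t(\cdot \mid s,a) \bigr\|_1.
\]
A variational regret bound then expresses $\Delta(T)$ as a function of $T$, $V_r$, and $V_p$.
\end{document}
```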

Cite

Text

Ortner et al. "Variational Regret Bounds for Reinforcement Learning." Uncertainty in Artificial Intelligence, 2019.

Markdown

[Ortner et al. "Variational Regret Bounds for Reinforcement Learning." Uncertainty in Artificial Intelligence, 2019.](https://mlanthology.org/uai/2019/ortner2019uai-variational/)

BibTeX

@inproceedings{ortner2019uai-variational,
  title     = {{Variational Regret Bounds for Reinforcement Learning}},
  author    = {Ortner, Ronald and Gajane, Pratik and Auer, Peter},
  booktitle = {Uncertainty in Artificial Intelligence},
  year      = {2019},
  pages     = {81--90},
  volume    = {115},
  url       = {https://mlanthology.org/uai/2019/ortner2019uai-variational/}
}