Exploration-Exploitation in MDPs with Options

Abstract

While a large body of empirical results shows that temporally-extended actions and options can significantly affect an agent's learning performance, the theoretical understanding of how and when options are beneficial in online reinforcement learning remains limited. In this paper, we derive upper and lower bounds on the regret of a variant of UCRL that uses options. We first analyze the algorithm in the general setting of semi-Markov decision processes (SMDPs); we then show how these results translate to the specific case of MDPs with options, and we illustrate simple scenarios in which the regret of learning with options can be provably much smaller than the regret suffered when learning with primitive actions.

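The reduction mentioned in the abstract follows the options framework of Sutton, Precup and Singh (1999): an option is a triple (initiation set, internal policy, termination condition), and an agent that acts only through options observes, per option execution, just the cumulative reward and the random duration, i.e., an SMDP-level transition. The following minimal Python sketch illustrates that reduction; it is not the authors' code, and the chain environment, the go_right option, and all names and parameters are hypothetical.

import random

class Option:
    """An option = (initiation set I, internal policy pi, termination beta)."""
    def __init__(self, initiation, policy, termination):
        self.initiation = initiation    # states where the option can be started
        self.policy = policy            # maps state -> primitive action
        self.termination = termination  # maps state -> termination probability

def execute_option(env_step, state, option, rng):
    """Run `option` from `state`; return (next_state, cumulative_reward, duration).

    `env_step(state, action)` is assumed to return (next_state, reward).
    The learner only observes the aggregate pair (R, tau), which is exactly
    an SMDP-level transition: this is the sense in which learning with
    options reduces to learning in an SMDP.
    """
    assert state in option.initiation
    total_reward, duration = 0.0, 0
    while True:
        action = option.policy(state)
        state, reward = env_step(state, action)
        total_reward += reward
        duration += 1
        if rng.random() < option.termination(state):
            return state, total_reward, duration

# Hypothetical toy example (not from the paper): a 1-d chain with a
# "move right until the last state" option.
N = 5
def chain_step(s, a):                     # a in {-1, +1}
    s2 = max(0, min(N - 1, s + a))
    return s2, 1.0 if s2 == N - 1 else 0.0

go_right = Option(
    initiation=set(range(N)),
    policy=lambda s: +1,
    termination=lambda s: 1.0 if s == N - 1 else 0.0,
)

rng = random.Random(0)
s, R, tau = execute_option(chain_step, 0, go_right, rng)
print(f"SMDP sample: next state={s}, cumulative reward={R}, duration={tau}")
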
Cite

Text

Fruit and Lazaric. "Exploration-Exploitation in MDPs with Options." International Conference on Artificial Intelligence and Statistics, 2017.

Markdown

[Fruit and Lazaric. "Exploration-Exploitation in MDPs with Options." International Conference on Artificial Intelligence and Statistics, 2017.](https://mlanthology.org/aistats/2017/fruit2017aistats-exploration/)

BibTeX

@inproceedings{fruit2017aistats-exploration,
  title     = {{Exploration-Exploitation in MDPs with Options}},
  author    = {Fruit, Ronan and Lazaric, Alessandro},
  booktitle = {International Conference on Artificial Intelligence and Statistics},
  year      = {2017},
  pages     = {576--584},
  url       = {https://mlanthology.org/aistats/2017/fruit2017aistats-exploration/}
}