Adaptive Reward-Free Exploration
Abstract
Reward-free exploration is a reinforcement learning setting recently studied by Jin et al. (2020), who address it by running several algorithms with regret guarantees in parallel. In our work, we instead propose a more natural adaptive approach to reward-free exploration that directly reduces upper bounds on the maximum MDP estimation error. We show that, interestingly, our reward-free UCRL algorithm (RF-UCRL) can be seen as a variant of an algorithm proposed by Fiechter in 1994, originally designed for a different objective that we call best-policy identification. We prove that RF-UCRL needs a number of episodes of order $(SAH^4/\epsilon^2)(\log(1/\delta) + S)$ to output, with probability $1-\delta$, an $\epsilon$-approximation of the optimal policy for any reward function. This bound improves over existing sample-complexity bounds in both the small-$\epsilon$ and the small-$\delta$ regimes. We further investigate the relative complexities of reward-free exploration and best-policy identification.
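To make the flavor of such an adaptive scheme concrete, here is a minimal Python sketch of a reward-free exploration loop of this kind. It is not the paper's RF-UCRL pseudocode: the bonus shape, the stopping threshold, and the `env.reset()`/`env.step()` interface (returning integer states) are simplifying assumptions made for illustration. The shared idea is to maintain, by backward induction, an upper bound on the MDP estimation error and to act greedily to shrink it, stopping once the bound is small enough for the target accuracy $\epsilon$ and confidence $\delta$.

```python
# Illustrative sketch (not the authors' exact RF-UCRL): a reward-free exploration
# loop for a tabular episodic MDP with S states, A actions, horizon H.
# The agent keeps empirical transition counts and an uncertainty bound E[h, s, a],
# then acts greedily to reduce the largest bound. Bonus constants are assumptions.
import numpy as np

def reward_free_explore(env, S, A, H, epsilon, delta, max_episodes=10_000):
    counts = np.zeros((S, A))            # visit counts n(s, a)
    trans = np.zeros((S, A, S))          # transition counts n(s, a, s')
    for episode in range(max_episodes):
        p_hat = trans / np.maximum(counts[..., None], 1)           # empirical P(s'|s,a)
        bonus = H * np.sqrt(np.log(2 * S * A * H / delta) / np.maximum(counts, 1))
        # Backward induction on an upper bound E[h, s, a] of the estimation error,
        # in the spirit of the error bounds the paper's algorithm drives down.
        E = np.zeros((H + 1, S, A))
        for h in range(H - 1, -1, -1):
            next_err = E[h + 1].max(axis=1)                        # max over actions
            E[h] = np.minimum(H, bonus + p_hat @ next_err)
        if E[0].max() <= epsilon / 2:                              # hypothetical stopping rule
            return p_hat, episode                                  # estimated model, episodes used
        # Run one episode, acting greedily w.r.t. the current error bounds.
        s = env.reset()                                            # assumed to return an int state
        for h in range(H):
            a = int(np.argmax(E[h, s]))
            s_next = env.step(a)                                   # assumed to return the next int state
            counts[s, a] += 1
            trans[s, a, s_next] += 1
            s = s_next
    return trans / np.maximum(counts[..., None], 1), max_episodes
```

Once the loop halts, the returned empirical model can be used to plan a near-optimal policy for any reward function supplied afterwards, which is the point of the reward-free setting.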
Cite
Text
Kaufmann et al. "Adaptive Reward-Free Exploration." Proceedings of the 32nd International Conference on Algorithmic Learning Theory, 2021.
Markdown
[Kaufmann et al. "Adaptive Reward-Free Exploration." Proceedings of the 32nd International Conference on Algorithmic Learning Theory, 2021.](https://mlanthology.org/alt/2021/kaufmann2021alt-adaptive/)
BibTeX
@inproceedings{kaufmann2021alt-adaptive,
title = {{Adaptive Reward-Free Exploration}},
author = {Kaufmann, Emilie and Ménard, Pierre and Darwiche Domingues, Omar and Jonsson, Anders and Leurent, Edouard and Valko, Michal},
booktitle = {Proceedings of the 32nd International Conference on Algorithmic Learning Theory},
year = {2021},
pages = {865--891},
volume = {132},
url = {https://mlanthology.org/alt/2021/kaufmann2021alt-adaptive/}
}