Minimax Regret Bounds for Reinforcement Learning
Abstract
We consider the problem of provably optimal exploration in reinforcement learning for finite horizon MDPs. We show that an optimistic modification to value iteration achieves a regret bound of $\tilde {O}( \sqrt{HSAT} + H^2S^2A+H\sqrt{T})$ where $H$ is the time horizon, $S$ the number of states, $A$ the number of actions and $T$ the number of time-steps. This result improves over the best previously known bound $\tilde {O}(HS \sqrt{AT})$ achieved by the UCRL2 algorithm. The key significance of our new result is that when $T\geq H^3S^3A$ and $SA\geq H$, it leads to a regret of $\tilde{O}(\sqrt{HSAT})$ that matches the established lower bound of $\Omega(\sqrt{HSAT})$ up to a logarithmic factor. Our analysis contains two key insights. We use a careful application of concentration inequalities to the optimal value function as a whole, rather than to the transition probabilities (to improve scaling in $S$), and we define Bernstein-based “exploration bonuses” that use the empirical variance of the estimated values at the next states (to improve scaling in $H$).
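The sketch below illustrates the core planning step described in the abstract: optimistic value iteration over a finite horizon, where the exploration bonus is Bernstein-style and uses the empirical variance of the estimated next-state values. This is only a simplified illustration of the idea, not the paper's exact algorithm; the function name, the form of the logarithmic factor `L`, the constant `c`, and the omission of the paper's lower-order correction terms are all assumptions made for brevity.

```python
import numpy as np

def optimistic_value_iteration(N, P_hat, R_hat, H, delta, c=1.0):
    """One planning pass of optimistic value iteration with a Bernstein-style
    bonus built from the empirical variance of the estimated next-state values.
    Simplified sketch: constants, log factors, and lower-order correction
    terms differ from the paper's exact bonus.

    N      : (S, A) visit counts
    P_hat  : (S, A, S) empirical transition probabilities
    R_hat  : (S, A) empirical mean rewards in [0, 1]
    H      : horizon
    delta  : confidence parameter
    """
    S, A, _ = P_hat.shape
    L = np.log(S * A * max(N.max(), 1) / delta)        # assumed form of the log factor
    Q = np.zeros((H + 1, S, A))
    V = np.zeros((H + 1, S))

    for h in range(H - 1, -1, -1):
        EV = P_hat @ V[h + 1]                          # (S, A) expected next-state value
        VarV = np.maximum(P_hat @ (V[h + 1] ** 2) - EV ** 2, 0.0)  # empirical variance of V_{h+1}
        n = np.maximum(N, 1)
        bonus = np.sqrt(2 * L * VarV / n) + c * H * L / n          # Bernstein-style bonus
        Q[h] = np.minimum(R_hat + EV + bonus, H)       # optimistic Q, clipped at H
        V[h] = Q[h].max(axis=1)

    policy = Q[:H].argmax(axis=2)                      # greedy policy for each step h
    return Q, V, policy
```

In an episodic loop, one would recompute `P_hat`, `R_hat`, and `N` from the data gathered so far, call this routine at the start of each episode, and then act greedily with respect to the returned `policy`.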
Cite
Text
Azar et al. "Minimax Regret Bounds for Reinforcement Learning." International Conference on Machine Learning, 2017.
Markdown
[Azar et al. "Minimax Regret Bounds for Reinforcement Learning." International Conference on Machine Learning, 2017.](https://mlanthology.org/icml/2017/azar2017icml-minimax/)
BibTeX
@inproceedings{azar2017icml-minimax,
title = {{Minimax Regret Bounds for Reinforcement Learning}},
author = {Azar, Mohammad Gheshlaghi and Osband, Ian and Munos, Rémi},
booktitle = {International Conference on Machine Learning},
year = {2017},
pages = {263-272},
volume = {70},
url = {https://mlanthology.org/icml/2017/azar2017icml-minimax/}
}