The Sample-Complexity of General Reinforcement Learning
Abstract
We study the sample-complexity of reinforcement learning in a general setting without assuming ergodicity or finiteness of the environment. Instead, we define a topology on the space of environments and show that if an environment class is compact with respect to this topology then finite sample-complexity bounds are possible, and we give an algorithm achieving these bounds. We also show the existence of non-compact environment classes for which no finite sample-complexity bound is achievable. A lower bound is presented that matches the upper bound except for logarithmic factors.
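For context, the sample-complexity notion in the abstract can be read in the usual PAC sense for general (history-based) reinforcement learning. The sketch below states that standard formalization; the notation (environment mu, policy pi, history h_t, values V, accuracy epsilon, confidence delta) is illustrative and assumed here, not quoted from the paper.

```latex
% A hedged sketch of a standard PAC-style sample-complexity definition for
% general RL; notation is illustrative, not taken from the paper.
%
% An agent following policy \pi in environment \mu generates a history h_t
% at each time-step t. The policy is \epsilon-suboptimal at time t when
% V^*_\mu(h_t) - V^\pi_\mu(h_t) > \epsilon, where V^\pi_\mu(h_t) is the
% (discounted) value of \pi from h_t and V^*_\mu(h_t) = \sup_\pi V^\pi_\mu(h_t).
%
% Sample-complexity: the number of \epsilon-suboptimal time-steps.
\[
  \Lambda_\mu(\pi, \epsilon) \;=\;
  \bigl|\{\, t \in \mathbb{N} : V^{*}_{\mu}(h_t) - V^{\pi}_{\mu}(h_t) > \epsilon \,\}\bigr|
\]
% A finite sample-complexity bound for an environment class \mathcal{M} is a
% function N(\epsilon,\delta) such that, for every \mu \in \mathcal{M}, the
% agent's policy \pi satisfies, with probability at least 1 - \delta,
\[
  \Lambda_\mu(\pi, \epsilon) \;\le\; N(\epsilon, \delta).
\]
```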
Cite
Text
Lattimore et al. "The Sample-Complexity of General Reinforcement Learning." International Conference on Machine Learning, 2013.

Markdown

[Lattimore et al. "The Sample-Complexity of General Reinforcement Learning." International Conference on Machine Learning, 2013.](https://mlanthology.org/icml/2013/lattimore2013icml-samplecomplexity/)

BibTeX
@inproceedings{lattimore2013icml-samplecomplexity,
  title     = {{The Sample-Complexity of General Reinforcement Learning}},
  author    = {Lattimore, Tor and Hutter, Marcus and Sunehag, Peter},
  booktitle = {International Conference on Machine Learning},
  year      = {2013},
  pages     = {28--36},
  volume    = {28},
  url       = {https://mlanthology.org/icml/2013/lattimore2013icml-samplecomplexity/}
}