Inexperienced RL Agents Can’t Get It Right: Lower Bounds on Regret at Finite Sample Complexity
Abstract
We consider a family $\mathcal M$ of MDPs over given state and action spaces, and an agent that is sequentially confronted with tasks from $\mathcal M$. Although stated for this stepwise change in distributions, the insight we develop is informative for continually changing distributions as well. In order to study how the structure of $\mathcal M$, viewed as a learning environment, impacts the learning efficiency of the agent, we formulate an RL analog of fat-shattering dimension for MDP families and show that it implies a nontrivial lower bound on regret whenever too few steps have been taken. More precisely, if $\mathcal M$ shatters $d$ states, then there is a constant $c$, depending on this shattering, such that an inexperienced agent that has explored the learning environment for fewer than $d$ steps necessarily incurs regret above $c$ on some MDP in the family.
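For concreteness, the abstract's main claim can be written as a single display. This is a minimal sketch in our own notation, not the paper's formal statement: the symbols $A$, $t$, and $\operatorname{Regret}_t(A, M)$ (the regret of agent $A$ after $t$ steps on MDP $M$) are assumptions, under the reading that $\mathcal M$ shatters $d$ states with associated constant $c$:

% Sketch of the lower bound, under assumed notation:
%   A                  -- any learning agent
%   t                  -- number of exploration steps taken so far
%   Regret_t(A, M)     -- regret of A after t steps on MDP M
\[
  t < d
  \;\Longrightarrow\;
  \sup_{M \in \mathcal{M}} \operatorname{Regret}_t(A, M) \;>\; c .
\]

That is, no agent can drive worst-case regret over the family below $c$ before it has taken at least $d$ steps.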
Cite
Text
Fraser and Létourneau. "Inexperienced RL Agents Can’t Get It Right: Lower Bounds on Regret at Finite Sample Complexity." Proceedings of The 1st Conference on Lifelong Learning Agents, 2022.
Markdown
[Fraser and Létourneau. "Inexperienced RL Agents Can’t Get It Right: Lower Bounds on Regret at Finite Sample Complexity." Proceedings of The 1st Conference on Lifelong Learning Agents, 2022.](https://mlanthology.org/collas/2022/fraser2022collas-inexperienced/)
BibTeX
@inproceedings{fraser2022collas-inexperienced,
title = {{Inexperienced RL Agents Can’t Get It Right: Lower Bounds on Regret at Finite Sample Complexity}},
author = {Fraser, Maia and Létourneau, Vincent},
booktitle = {Proceedings of The 1st Conference on Lifelong Learning Agents},
year = {2022},
pages = {327--334},
volume = {199},
url = {https://mlanthology.org/collas/2022/fraser2022collas-inexperienced/}
}