Provably Efficient Exploration for Reinforcement Learning Using Unsupervised Learning
Abstract
Motivated by the prevailing paradigm of using unsupervised learning for efficient exploration in reinforcement learning (RL) problems [tang2017exploration,bellemare2016unifying], we investigate when this paradigm is provably efficient. We study episodic Markov decision processes with rich observations generated from a small number of latent states. We present a general algorithmic framework that is built upon two components: an unsupervised learning algorithm and a no-regret tabular RL algorithm. Theoretically, we prove that as long as the unsupervised learning algorithm enjoys a polynomial sample complexity guarantee, we can find a near-optimal policy with sample complexity polynomial in the number of latent states, which is significantly smaller than the number of observations. Empirically, we instantiate our framework on a class of hard exploration problems to demonstrate the practicality of our theory.
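Since the abstract describes the framework only at a high level, a minimal sketch may help fix ideas. In the code below (an illustration under assumptions, not the paper's actual algorithm), k-means plays the role of the unsupervised learning component and Q-learning with a count-based bonus plays the role of the no-regret tabular RL component; the toy block-MDP environment, the helper names emit, step, and decode, and all hyperparameters are hypothetical.

# Illustrative sketch of the two-component framework: an unsupervised learner
# decodes rich observations into latent states, and a tabular RL algorithm
# explores the small decoded state space. K-means and bonus-based Q-learning
# are stand-ins; everything here is an assumption for demonstration only.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
N_LATENT, N_ACTIONS, HORIZON, OBS_DIM = 5, 2, 10, 32

# Toy block MDP: each latent state emits noisy high-dimensional observations.
emission_means = rng.normal(size=(N_LATENT, OBS_DIM)) * 3.0
transitions = rng.dirichlet(np.ones(N_LATENT), size=(N_LATENT, N_ACTIONS))
rewards = rng.uniform(size=(N_LATENT, N_ACTIONS))

def emit(s):
    """Rich observation generated from latent state s."""
    return emission_means[s] + rng.normal(size=OBS_DIM)

def step(s, a):
    """Environment step: returns next observation, reward, next latent state."""
    s_next = rng.choice(N_LATENT, p=transitions[s, a])
    return emit(s_next), rewards[s, a], s_next

# Phase 1 (unsupervised learning): collect observations with a random policy
# and learn a decoder from observations to latent-state labels.
obs_buffer = []
for _ in range(200):
    s = 0
    obs_buffer.append(emit(s))
    for _ in range(HORIZON):
        obs, _, s = step(s, rng.integers(N_ACTIONS))
        obs_buffer.append(obs)
decoder = KMeans(n_clusters=N_LATENT, n_init=10, random_state=0).fit(np.array(obs_buffer))

def decode(o):
    """Map a rich observation to a (learned) latent-state label."""
    return int(decoder.predict(o.reshape(1, -1))[0])

# Phase 2 (tabular RL on decoded states): optimistic Q-learning with a
# count-based exploration bonus, operating only on the small decoded space.
Q = np.full((HORIZON, N_LATENT, N_ACTIONS), float(HORIZON))
counts = np.zeros((N_LATENT, N_ACTIONS))
for episode in range(500):
    s_true = 0
    obs = emit(s_true)
    for h in range(HORIZON):
        z = decode(obs)
        a = int(np.argmax(Q[h, z]))
        obs_next, r, s_true = step(s_true, a)
        counts[z, a] += 1
        bonus = 1.0 / np.sqrt(counts[z, a])
        z_next = decode(obs_next)
        target = r + bonus + (Q[h + 1, z_next].max() if h + 1 < HORIZON else 0.0)
        lr = 1.0 / counts[z, a]
        Q[h, z, a] += lr * (target - Q[h, z, a])
        obs = obs_next

print("Greedy value estimate at the initial decoded state:", Q[0, decode(emit(0))].max())

The point the sketch mirrors is that the tabular RL component only ever sees the decoded labels, so its tables and exploration bonuses scale with the number of latent states rather than the number of observations.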
Cite
Text
Feng et al. "Provably Efficient Exploration for Reinforcement Learning Using Unsupervised Learning." Neural Information Processing Systems, 2020.
Markdown
[Feng et al. "Provably Efficient Exploration for Reinforcement Learning Using Unsupervised Learning." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/feng2020neurips-provably-a/)
BibTeX
@inproceedings{feng2020neurips-provably-a,
title = {{Provably Efficient Exploration for Reinforcement Learning Using Unsupervised Learning}},
author = {Feng, Fei and Wang, Ruosong and Yin, Wotao and Du, Simon S and Yang, Lin},
booktitle = {Neural Information Processing Systems},
year = {2020},
url = {https://mlanthology.org/neurips/2020/feng2020neurips-provably-a/}
}