PC-MLP: Model-Based Reinforcement Learning with Policy Cover Guided Exploration

Abstract

Model-based Reinforcement Learning (RL) is a popular learning paradigm due to its potential sample efficiency compared to model-free RL. However, existing empirical model-based RL approaches lack the ability to explore. This work studies a computationally and statistically efficient model-based algorithm for both Kernelized Nonlinear Regulators (KNR) and linear Markov Decision Processes (MDPs). For both models, our algorithm guarantees polynomial sample complexity and only uses access to a planning oracle. Experimentally, we first demonstrate the flexibility and efficacy of our algorithm on a set of exploration-challenging control tasks where existing empirical model-based RL approaches completely fail. We then show that our approach retains excellent performance even on common dense-reward control benchmarks that do not require heavy exploration.

Cite

Text

Song and Sun. "PC-MLP: Model-Based Reinforcement Learning with Policy Cover Guided Exploration." International Conference on Machine Learning, 2021.

Markdown

[Song and Sun. "PC-MLP: Model-Based Reinforcement Learning with Policy Cover Guided Exploration." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/song2021icml-pcmlp/)

BibTeX

@inproceedings{song2021icml-pcmlp,
  title     = {{PC-MLP: Model-Based Reinforcement Learning with Policy Cover Guided Exploration}},
  author    = {Song, Yuda and Sun, Wen},
  booktitle = {International Conference on Machine Learning},
  year      = {2021},
  pages     = {9801--9811},
  volume    = {139},
  url       = {https://mlanthology.org/icml/2021/song2021icml-pcmlp/}
}