Discovering Options for Exploration by Minimizing Cover Time
Abstract
One of the main challenges in reinforcement learning is solving tasks with sparse reward. We show that the difficulty of discovering a distant rewarding state in an MDP is bounded by the expected cover time of a random walk over the graph induced by the MDP’s transition dynamics. We therefore propose to accelerate exploration by constructing options that minimize cover time. We introduce a new option discovery algorithm that diminishes the expected cover time by connecting the most distant states in the state-space graph with options. We show empirically that the proposed algorithm improves learning in several domains with sparse rewards.
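Below is a minimal sketch of the idea described in the abstract: identify the two "most distant" states of the state-space graph and connect them with an option, thereby shrinking the expected cover time of a random walk. In this sketch, "most distant" is instantiated spectrally via the Fiedler vector (the eigenvector of the second-smallest eigenvalue of the graph Laplacian); the graph, the option-as-shortcut-edge model, and the toy example are illustrative assumptions and not necessarily the authors' exact procedure.

```python
# Sketch: connect the two states at the extremes of the Fiedler vector,
# modeling the resulting option as a shortcut edge in the state-space graph.
import numpy as np
import networkx as nx


def fiedler_option_endpoints(G: nx.Graph):
    """Return the pair of states with extreme Fiedler-vector values."""
    nodes = list(G.nodes)
    # Dense graph Laplacian L = D - A of the (undirected) state-space graph.
    L = nx.laplacian_matrix(G, nodelist=nodes).toarray().astype(float)
    # Eigendecomposition; np.linalg.eigh returns eigenvalues in ascending order.
    eigvals, eigvecs = np.linalg.eigh(L)
    fiedler = eigvecs[:, 1]  # eigenvector of the second-smallest eigenvalue
    src = nodes[int(np.argmin(fiedler))]
    dst = nodes[int(np.argmax(fiedler))]
    return src, dst


if __name__ == "__main__":
    # Toy state-space graph: a long path, which a random walk covers slowly.
    G = nx.path_graph(20)
    src, dst = fiedler_option_endpoints(G)
    # Treat the discovered option as a shortcut edge between the two extreme
    # states; this shortens the expected cover time of a random walk on G.
    G.add_edge(src, dst)
    print(f"option connects state {src} to state {dst}")
```

On the path graph above, the Fiedler-vector extremes are the two endpoints of the path, so the added option connects the states that a random walk takes longest to travel between.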
Cite
Text
Jinnai et al. "Discovering Options for Exploration by Minimizing Cover Time." International Conference on Machine Learning, 2019.

Markdown

[Jinnai et al. "Discovering Options for Exploration by Minimizing Cover Time." International Conference on Machine Learning, 2019.](https://mlanthology.org/icml/2019/jinnai2019icml-discovering/)

BibTeX
@inproceedings{jinnai2019icml-discovering,
  title     = {{Discovering Options for Exploration by Minimizing Cover Time}},
  author    = {Jinnai, Yuu and Park, Jee Won and Abel, David and Konidaris, George},
  booktitle = {International Conference on Machine Learning},
  year      = {2019},
  pages     = {3130--3139},
  volume    = {97},
  url       = {https://mlanthology.org/icml/2019/jinnai2019icml-discovering/}
}