Hierarchical Reinforcement Learning for Efficient Exploration and Transfer
Abstract
Sparse-reward domains are challenging for reinforcement learning algorithms, since significant exploration is required before encountering a reward for the first time. Hierarchical reinforcement learning can facilitate exploration by reducing the number of decisions necessary before obtaining a reward. In this paper, we present a novel hierarchical reinforcement learning framework based on the compression of an invariant state space that is common to a range of tasks. The algorithm introduces subtasks which consist of moving between the state partitions induced by the compression. Results indicate that the algorithm can successfully solve complex sparse-reward domains and transfer knowledge to solve new, previously unseen tasks more quickly.
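The abstract describes subtasks defined as moving between the state partitions induced by a compression of the state space. Below is a minimal sketch of that idea in a toy corridor environment: a high-level Q-learner picks a neighboring partition as a subgoal, and a low-level Q-learner is rewarded intrinsically for reaching it. The environment, the hand-coded partition function phi (the paper learns this compression), and all hyperparameters are illustrative assumptions, not the authors' algorithm.

import random
from collections import defaultdict

# Toy corridor MDP: states 0..29, actions move left/right, reward only at state 29.
N_STATES, GOAL = 30, 29
ACTIONS = (-1, +1)

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def phi(state):
    # Hand-coded compression into partitions of 5 states (a stand-in for the
    # learned invariant compression described in the abstract).
    return state // 5

def eps_greedy(q, key, choices, eps=0.1):
    if random.random() < eps:
        return random.choice(choices)
    return max(choices, key=lambda c: q[(key, c)])

q_hi = defaultdict(float)  # (partition, target partition) -> value
q_lo = defaultdict(float)  # ((state, target partition), action) -> value
MAX_Z = phi(N_STATES - 1)

for episode in range(500):
    s, done, t = 0, False, 0
    while not done and t < 400:
        z = phi(s)
        targets = [g for g in (z - 1, z + 1) if 0 <= g <= MAX_Z]
        g = eps_greedy(q_hi, z, targets)           # high level picks a subgoal partition
        ext = 0.0
        while phi(s) == z and not done and t < 400:
            a = eps_greedy(q_lo, (s, g), ACTIONS)  # low level acts toward the subgoal
            s2, r, done = step(s, a)
            intrinsic = 1.0 if phi(s2) == g else 0.0  # reward for entering the target partition
            best = max(q_lo[((s2, g), b)] for b in ACTIONS)
            q_lo[((s, g), a)] += 0.1 * (intrinsic + 0.95 * best - q_lo[((s, g), a)])
            ext += r
            s, t = s2, t + 1
        z2 = phi(s)
        nxt_targets = [g2 for g2 in (z2 - 1, z2 + 1) if 0 <= g2 <= MAX_Z]
        best_hi = 0.0 if done else max(q_hi[(z2, g2)] for g2 in nxt_targets)
        q_hi[(z, g)] += 0.1 * (ext + 0.95 * best_hi - q_hi[(z, g)])

print({k: round(v, 2) for k, v in sorted(q_hi.items())})

Note how the high-level decision sequence is much shorter than the primitive action sequence (6 partitions versus 30 states), which is the mechanism by which hierarchy eases exploration in sparse-reward settings.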
Cite
Text
Steccanella et al. "Hierarchical Reinforcement Learning for Efficient Exploration and Transfer." ICML 2020 Workshops: LifelongML, 2020.
Markdown
[Steccanella et al. "Hierarchical Reinforcement Learning for Efficient Exploration and Transfer." ICML 2020 Workshops: LifelongML, 2020.](https://mlanthology.org/icmlw/2020/steccanella2020icmlw-hierarchical/)
BibTeX
@inproceedings{steccanella2020icmlw-hierarchical,
title = {{Hierarchical Reinforcement Learning for Efficient Exploration and Transfer}},
author = {Steccanella, Lorenzo and Totaro, Simone and Allonsius, Damien and Jonsson, Anders},
booktitle = {ICML 2020 Workshops: LifelongML},
year = {2020},
url = {https://mlanthology.org/icmlw/2020/steccanella2020icmlw-hierarchical/}
}