Hierarchical Reinforcement Learning with Parameters
Abstract
In this work we introduce and evaluate a model of Hierarchical Reinforcement Learning with Parameters. In the first stage we train agents to execute relatively simple actions like reaching or gripping. In the second stage we train a hierarchical manager to compose these actions to solve more complicated tasks. The manager may pass parameters to agents, thus controlling the details of the undertaken actions. The hierarchical approach with parameters can be used with any optimization algorithm. In this work we adapt the methods described in [1] to our setting. We show that their theoretical foundation, including monotonicity of improvements, still holds. We experimentally compare hierarchical reinforcement learning with the standard, non-hierarchical approach and conclude that hierarchical learning with parameters is a viable way to improve final results and stability of learning.
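The two-stage scheme in the abstract can be sketched as a minimal interface: a manager selects one of the pretrained low-level skills together with a parameter vector, and the chosen skill interprets those parameters when producing its action. The skill names ("reach", "grip"), parameter shapes, and the random manager below are illustrative assumptions, not the paper's actual implementation.

```python
import random

# Hypothetical pretrained low-level skills: each maps a parameter
# vector to an action. The names and signatures are assumptions
# made for illustration only.
def reach(params):
    # params could encode, e.g., a target (x, y) position to reach toward
    return ("reach_action", tuple(params))

def grip(params):
    # params could encode, e.g., a gripper closing force
    return ("grip_action", tuple(params))

SKILLS = {"reach": reach, "grip": grip}

class Manager:
    """Toy hierarchical manager: picks a skill and the parameters to pass it."""
    def act(self, observation):
        # A trained manager would map the observation to (skill, params);
        # here we choose randomly just to demonstrate the interface.
        name = random.choice(sorted(SKILLS))
        params = [random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)]
        return name, params

manager = Manager()
skill_name, params = manager.act(observation=None)
action = SKILLS[skill_name](params)
print(skill_name, action)
```

Because the manager's output is just a (skill, parameters) pair, any policy-gradient optimizer can train it end to end over that action space, independently of how the skills themselves were trained.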
Cite
Text
Klimek et al. "Hierarchical Reinforcement Learning with Parameters." Proceedings of the 1st Annual Conference on Robot Learning, 2017.
BibTeX
@inproceedings{klimek2017corl-hierarchical,
title = {{Hierarchical Reinforcement Learning with Parameters}},
author = {Klimek, Maciej and Michalewski, Henryk and Miłoś, Piotr},
booktitle = {Proceedings of the 1st Annual Conference on Robot Learning},
year = {2017},
pages = {301--313},
volume = {78},
url = {https://mlanthology.org/corl/2017/klimek2017corl-hierarchical/}
}