Model Approximation for HEXQ Hierarchical Reinforcement Learning
Abstract
HEXQ is a reinforcement learning algorithm that discovers hierarchical structure automatically. The generated task hierarchy represents the problem at different levels of abstraction. In this paper we extend HEXQ with heuristics that automatically approximate the structure of the task hierarchy. Construction, learning and execution time, as well as storage requirements of a task hierarchy may be significantly reduced and traded off against solution quality.
Cite
Text
Hengst. "Model Approximation for HEXQ Hierarchical Reinforcement Learning." European Conference on Machine Learning, 2004. doi:10.1007/978-3-540-30115-8_16
Markdown
[Hengst. "Model Approximation for HEXQ Hierarchical Reinforcement Learning." European Conference on Machine Learning, 2004.](https://mlanthology.org/ecmlpkdd/2004/hengst2004ecml-model/) doi:10.1007/978-3-540-30115-8_16
BibTeX
@inproceedings{hengst2004ecml-model,
title = {{Model Approximation for HEXQ Hierarchical Reinforcement Learning}},
author = {Hengst, Bernhard},
booktitle = {European Conference on Machine Learning},
year = {2004},
pages = {144--155},
doi = {10.1007/978-3-540-30115-8_16},
url = {https://mlanthology.org/ecmlpkdd/2004/hengst2004ecml-model/}
}