Feudal Reinforcement Learning
Abstract
One way to speed up reinforcement learning is to enable learning to happen simultaneously at multiple resolutions in space and time. This paper shows how to create a Q-learning managerial hierarchy in which high level managers learn how to set tasks to their sub-managers who, in turn, learn how to satisfy them. Sub-managers need not initially understand their managers' commands. They simply learn to maximise their reinforcement in the context of the current command. We illustrate the system using a simple maze task. As the system learns how to get around, satisfying commands at the multiple levels, it explores more efficiently than standard, flat, Q-learning and builds a more comprehensive map.
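The idea in the abstract — a manager issues commands, and a sub-manager is rewarded only for satisfying the current command while the manager is rewarded by the environment — can be sketched as two coupled Q-learners. This is a minimal illustrative sketch on a 1D corridor, not the paper's actual maze or update rules; all names, rewards, and hyperparameters here are assumptions.

```python
import random

random.seed(0)

# Two-level feudal Q-learning sketch (illustrative only).
# The manager picks a command ("left"/"right") per state; the worker,
# conditioned on that command, picks a primitive move. The worker is
# reinforced for obeying the command; the manager gets the task reward.

N, GOAL = 8, 7                       # corridor states 0..7, goal at 7
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1    # assumed hyperparameters
COMMANDS, MOVES = ("left", "right"), (-1, +1)

q_mgr = {(s, c): 0.0 for s in range(N) for c in COMMANDS}
q_wrk = {(c, s, m): 0.0 for c in COMMANDS for s in range(N) for m in MOVES}

def eps_greedy(q, keys):
    """Pick a random key with prob EPS, else the highest-valued one."""
    if random.random() < EPS:
        return random.choice(keys)
    return max(keys, key=lambda k: q[k])

for episode in range(300):
    s = 0
    while s != GOAL:
        cmd = eps_greedy(q_mgr, [(s, c) for c in COMMANDS])[1]
        move = eps_greedy(q_wrk, [(cmd, s, m) for m in MOVES])[2]
        s2 = min(max(s + move, 0), N - 1)

        # Worker is paid for satisfying the command, not for the goal:
        obeyed = (move == +1) == (cmd == "right")
        r_wrk = 1.0 if obeyed else -1.0
        # Manager receives the environment's reward:
        r_mgr = 1.0 if s2 == GOAL else 0.0

        best_w = max(q_wrk[(cmd, s2, m)] for m in MOVES)
        q_wrk[(cmd, s, move)] += ALPHA * (
            r_wrk + GAMMA * best_w - q_wrk[(cmd, s, move)])
        best_m = max(q_mgr[(s2, c)] for c in COMMANDS)
        q_mgr[(s, cmd)] += ALPHA * (
            r_mgr + GAMMA * best_m - q_mgr[(s, cmd)])
        s = s2

# After training, the manager should prefer commanding "right" from the
# start state, and the worker should have learned to obey that command.
```

Note how the worker never sees the goal reward: it learns the meaning of "left" and "right" purely from the command-conditioned reinforcement, mirroring the abstract's point that sub-managers need not initially understand their managers' commands.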
Cite
Text
Dayan and Hinton. "Feudal Reinforcement Learning." Neural Information Processing Systems, 1992.
Markdown
[Dayan and Hinton. "Feudal Reinforcement Learning." Neural Information Processing Systems, 1992.](https://mlanthology.org/neurips/1992/dayan1992neurips-feudal/)
BibTeX
@inproceedings{dayan1992neurips-feudal,
title = {{Feudal Reinforcement Learning}},
author = {Dayan, Peter and Hinton, Geoffrey E.},
booktitle = {Neural Information Processing Systems},
year = {1992},
pages = {271-278},
url = {https://mlanthology.org/neurips/1992/dayan1992neurips-feudal/}
}