Hierarchical Orchestra of Policies
Abstract
Continual reinforcement learning poses a major challenge due to the tendency of agents to experience catastrophic forgetting when learning sequential tasks. In this paper, we introduce a modularity-based approach, called Hierarchical Orchestra of Policies (HOP), designed to mitigate catastrophic forgetting in lifelong reinforcement learning. HOP dynamically forms a hierarchy of policies based on a similarity metric between the current observations and previously encountered observations in successful tasks. Unlike other state-of-the-art methods, HOP does not require task labelling, allowing for robust adaptation in environments where boundaries between tasks are ambiguous. Our experiments, conducted across multiple tasks in a procedurally generated suite of environments, demonstrate that HOP significantly outperforms baseline methods in retaining knowledge across tasks and performs comparably to state-of-the-art transfer methods that require task labelling. Moreover, HOP achieves this without compromising performance when tasks remain constant, highlighting its versatility.
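The listing contains no code, but as a rough illustration of the mechanism the abstract describes (routing each observation to a previously learned policy when it resembles observations from a task that policy solved, without task labels), the following Python sketch shows one possible reading. The class and method names, the cosine-similarity metric, and the threshold are assumptions for illustration only, not the paper's implementation.

```python
import numpy as np

class PolicyOrchestra:
    """Hypothetical sketch of similarity-gated policy selection:
    keep a library of frozen policies plus observations gathered when
    each policy succeeded, and route new observations to the closest match."""

    def __init__(self, similarity_threshold=0.9):
        self.policies = []   # frozen policies from earlier tasks (callables)
        self.memories = []   # observations seen when each policy succeeded
        self.threshold = similarity_threshold

    def _similarity(self, obs, memory):
        # Assumed metric: max cosine similarity between obs and stored observations.
        obs = np.asarray(obs, dtype=float)
        obs = obs / (np.linalg.norm(obs) + 1e-8)
        mem = memory / (np.linalg.norm(memory, axis=1, keepdims=True) + 1e-8)
        return float(np.max(mem @ obs))

    def act(self, obs, current_policy):
        # Pick the archived policy whose successful observations best match obs;
        # otherwise fall back to the policy currently being trained.
        best_score, best_policy = -1.0, None
        for policy, memory in zip(self.policies, self.memories):
            score = self._similarity(obs, memory)
            if score > best_score:
                best_score, best_policy = score, policy
        if best_policy is not None and best_score >= self.threshold:
            return best_policy(obs)
        return current_policy(obs)

    def archive(self, policy, successful_observations):
        # After a task is solved: freeze the policy and store the observations
        # under which it succeeded, for later similarity comparisons.
        self.policies.append(policy)
        self.memories.append(np.asarray(successful_observations, dtype=float))
```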
Cite
Text
Cannon and Şimşek. "Hierarchical Orchestra of Policies." NeurIPS 2024 Workshops: IMOL, 2024.

Markdown
[Cannon and Şimşek. "Hierarchical Orchestra of Policies." NeurIPS 2024 Workshops: IMOL, 2024.](https://mlanthology.org/neuripsw/2024/cannon2024neuripsw-hierarchical/)

BibTeX
@inproceedings{cannon2024neuripsw-hierarchical,
title = {{Hierarchical Orchestra of Policies}},
author = {Cannon, Thomas P and Şimşek, Özgür},
booktitle = {NeurIPS 2024 Workshops: IMOL},
year = {2024},
url = {https://mlanthology.org/neuripsw/2024/cannon2024neuripsw-hierarchical/}
}