Hierarchies of Reward Machines
Abstract
Reward machines (RMs) are a recent formalism for representing the reward function of a reinforcement learning task as a finite-state machine whose edges encode subgoals of the task using high-level events. The structure of RMs enables the decomposition of a task into simpler, independently solvable subtasks, which helps tackle long-horizon and/or sparse-reward tasks. We propose a formalism that further abstracts the subtask structure by endowing an RM with the ability to call other RMs, thus composing a hierarchy of RMs (HRM). We exploit HRMs by treating each call to an RM as an independently solvable subtask within the options framework, and describe a curriculum-based method for learning HRMs from traces observed by the agent. Our experiments reveal that exploiting a handcrafted HRM leads to faster convergence than an equivalent flat HRM, and that learning an HRM is feasible in cases where its equivalent flat representation is not.
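The call mechanism described in the abstract can be pictured with a small sketch. The Python below is an illustrative simplification, not the authors' implementation: the names (`RM`, `run`, the `chop`/`build` events) are hypothetical, events are single symbols rather than the propositional labels used in the paper, and a call edge simply pushes the callee onto a stack until it reaches its accepting state.

```python
from dataclasses import dataclass, field


@dataclass
class RM:
    """A finite-state reward machine: edges fire on high-level events."""
    name: str
    initial: str
    accepting: str
    # edges[(state, event)] = (next_state, reward, called RM name or None)
    edges: dict = field(default_factory=dict)

    def add(self, src, event, dst, reward=0.0, call=None):
        self.edges[(src, event)] = (dst, reward, call)


def run(hrm, root, trace):
    """Run an event trace through an HRM given as {name: RM}.

    A call edge pushes the callee onto a stack; when a machine reaches
    its accepting state it is popped, returning control to its caller.
    Returns the accumulated reward and whether the root machine accepted.
    """
    stack = [(root, hrm[root].initial)]  # (machine name, current state)
    total = 0.0
    for event in trace:
        if not stack:                    # root already accepted
            break
        name, state = stack[-1]
        edge = hrm[name].edges.get((state, event))
        if edge is None:
            continue                     # event is irrelevant in this state
        dst, reward, call = edge
        total += reward
        stack[-1] = (name, dst)
        if call is not None:             # descend into the called RM
            stack.append((call, hrm[call].initial))
        while stack and stack[-1][1] == hrm[stack[-1][0]].accepting:
            stack.pop()                  # pop completed machines
    return total, not stack


# Usage: a root RM that calls a sub-RM before its final rewarded edge.
get_wood = RM("get_wood", "w0", "w1")
get_wood.add("w0", "chop", "w1")

root = RM("root", "u0", "u2")
root.add("u0", "start", "u1", call="get_wood")
root.add("u1", "build", "u2", reward=1.0)

hrm = {"root": root, "get_wood": get_wood}
print(run(hrm, "root", ["start", "chop", "build"]))  # (1.0, True)
```

In this sketch, each entry on the stack plays the role of an independently solvable subtask, loosely mirroring how the paper treats each call to an RM as an option.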
Cite
Text
Furelos-Blanco et al. "Hierarchies of Reward Machines." International Conference on Machine Learning, 2023.
Markdown
[Furelos-Blanco et al. "Hierarchies of Reward Machines." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/furelosblanco2023icml-hierarchies/)
BibTeX
@inproceedings{furelosblanco2023icml-hierarchies,
title = {{Hierarchies of Reward Machines}},
author = {Furelos-Blanco, Daniel and Law, Mark and Jonsson, Anders and Broda, Krysia and Russo, Alessandra},
booktitle = {International Conference on Machine Learning},
year = {2023},
pages = {10494--10541},
volume = {202},
url = {https://mlanthology.org/icml/2023/furelosblanco2023icml-hierarchies/}
}