Representing Repeated Structure in Reinforcement Learning Using Symmetric Motifs
Abstract
Transition structures in reinforcement learning can contain repeated motifs and redundancies. In this preliminary work, we suggest using the geometric decomposition of the adjacency matrix to form a mapping into an abstract state space. Using the Successor Representation (SR) framework, we decouple symmetries in the transition structure from the reward structure, and form a natural structural hierarchy by using separate SRs for the global and local structures of a given task. We demonstrate that policy evaluation with this method incurs low error and that the resulting representations can be significantly compressed.
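For context on the SR machinery the abstract refers to, below is a minimal sketch of computing the Successor Representation for a fixed policy and using it for policy evaluation. This is generic SR background, not the paper's motif decomposition; the ring-world transition matrix and function names are illustrative assumptions.

```python
import numpy as np

def successor_representation(T_pi, gamma=0.95):
    """SR under a fixed policy: M = (I - gamma * T_pi)^{-1},
    where T_pi is the state-to-state transition matrix induced by the policy."""
    n = T_pi.shape[0]
    return np.linalg.inv(np.eye(n) - gamma * T_pi)

def policy_evaluation(M, rewards):
    """Given the SR, state values are a single matrix-vector product, V = M r.
    This is how the SR decouples transition structure (M) from reward structure (r)."""
    return M @ rewards

# Toy example: a 4-state ring, a simple repeated/symmetric transition motif.
T = np.array([
    [0.0, 0.5, 0.0, 0.5],
    [0.5, 0.0, 0.5, 0.0],
    [0.0, 0.5, 0.0, 0.5],
    [0.5, 0.0, 0.5, 0.0],
])
r = np.array([0.0, 0.0, 1.0, 0.0])  # reward only in state 2

M = successor_representation(T, gamma=0.9)
V = policy_evaluation(M, r)
print(V)
```

Because the values depend on the transitions only through M, a symmetric or repeated motif in the transition structure can in principle be represented once and reused, which is the compression the abstract alludes to.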
Cite
Text
Sargent et al. "Representing Repeated Structure in Reinforcement Learning Using Symmetric Motifs." NeurIPS 2022 Workshops: NeurReps, 2022.

Markdown
[Sargent et al. "Representing Repeated Structure in Reinforcement Learning Using Symmetric Motifs." NeurIPS 2022 Workshops: NeurReps, 2022.](https://mlanthology.org/neuripsw/2022/sargent2022neuripsw-representing/)

BibTeX
@inproceedings{sargent2022neuripsw-representing,
  title = {{Representing Repeated Structure in Reinforcement Learning Using Symmetric Motifs}},
  author = {Sargent, Matthew James and Mavor-Parker, Augustine N. and Bentley, Peter and Barry, Caswell},
  booktitle = {NeurIPS 2022 Workshops: NeurReps},
  year = {2022},
  url = {https://mlanthology.org/neuripsw/2022/sargent2022neuripsw-representing/}
}