Recurrent Reinforcement Learning with Memoroids

Abstract

Memory models such as Recurrent Neural Networks (RNNs) and Transformers address Partially Observable Markov Decision Processes (POMDPs) by mapping trajectories to latent Markov states. Neither model scales particularly well to long sequences, especially compared to an emerging class of memory models called Linear Recurrent Models. We discover that the recurrent update of these models resembles a monoid, leading us to reformulate existing models using a novel monoid-based framework that we call memoroids. We revisit the traditional approach to batching in recurrent reinforcement learning, highlighting theoretical and empirical deficiencies. We leverage memoroids to propose a batching method that improves sample efficiency, increases the return, and simplifies the implementation of recurrent loss functions in reinforcement learning.
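The key observation in the abstract, that a linear recurrent update forms a monoid (an associative binary operator with an identity element), is what allows a sequential loop to be replaced with a parallel scan, and what lets episode boundaries be handled inside the operator rather than by padding fixed-length segments. Below is a minimal sketch of that idea, not the authors' implementation: it assumes the affine recurrence h_t = A_t h_{t-1} + b_t as a stand-in for a linear recurrent model, a zero initial state at episode starts, and a hypothetical reset flag carried alongside each element to illustrate the batching idea.

```python
# Minimal sketch (not the authors' code) of a monoid-style recurrent update
# for the affine recurrence h_t = A_t @ h_{t-1} + b_t, scanned in parallel.
import jax
import jax.numpy as jnp

def op(x, y):
    """Associative operator on (A, b, start) elements.

    Without resets, composing the affine maps (A1, b1) then (A2, b2)
    yields (A2 @ A1, A2 @ b1 + b2). When the right element begins a new
    episode, everything to its left is discarded: A is zeroed so the
    incoming state is ignored (valid under a zero initial state), which
    lets variable-length episodes share one flat sequence with no padding.
    """
    A1, b1, s1 = x
    A2, b2, s2 = y
    A = jnp.where(s2[..., None, None], jnp.zeros_like(A2), A2 @ A1)
    b = jnp.where(s2[..., None], b2,
                  jnp.einsum('...ij,...j->...i', A2, b1) + b2)
    return A, b, jnp.logical_or(s1, s2)

dim, T = 4, 6
kA, kb = jax.random.split(jax.random.PRNGKey(0))
As = 0.1 * jax.random.normal(kA, (T, dim, dim))  # per-step transitions A_t
bs = jax.random.normal(kb, (T, dim))             # per-step inputs b_t
starts = jnp.array([True, False, False, True, False, False])  # episode starts

# Because `op` is associative, all T cumulative compositions can be computed
# by a parallel scan in O(log T) depth instead of a sequential loop.
As_c, bs_c, _ = jax.lax.associative_scan(op, (As, bs, starts))

# Apply each cumulative map to the initial state to recover h_1 .. h_T.
h0 = jnp.zeros(dim)
hs = jnp.einsum('tij,j->ti', As_c, h0) + bs_c
```

Because the reset lives inside the associative operator, episodes of different lengths can be concatenated into a single flat sequence and scanned together, which is the property underlying the batching improvements the abstract describes.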

Cite

Text

Morad et al. "Recurrent Reinforcement Learning with Memoroids." Neural Information Processing Systems, 2024. doi:10.52202/079017-0459

Markdown

[Morad et al. "Recurrent Reinforcement Learning with Memoroids." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/morad2024neurips-recurrent/) doi:10.52202/079017-0459

BibTeX

@inproceedings{morad2024neurips-recurrent,
  title     = {{Recurrent Reinforcement Learning with Memoroids}},
  author    = {Morad, Steven and Lu, Chris and Kortvelesy, Ryan and Liwicki, Stephan and Foerster, Jakob and Prorok, Amanda},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-0459},
  url       = {https://mlanthology.org/neurips/2024/morad2024neurips-recurrent/}
}