Re:Frame - Retrieving Experience from Associative Memory
Abstract
Transformers have demonstrated strong performance in offline reinforcement learning (RL) for Markovian tasks due to their ability to process historical information efficiently. However, in partially observable environments, where agents must rely on past experiences to make decisions in the present, transformers are limited by their fixed context window and struggle to capture long-term dependencies. Extending this window indefinitely is not feasible because of the quadratic complexity of the attention mechanism. This limitation led us to explore other memory-handling approaches. In neurobiology, associative memory allows the brain to link different stimuli by activating neurons simultaneously, creating associations between experiences that occurred around the same time. Motivated by this concept, we introduce **Re:Frame** (**R**etrieving **E**xperience **Fr**om **A**ssociative **Me**mory), a novel RL algorithm that enables agents to better utilize their past experiences. Re:Frame incorporates a long-term memory mechanism that enhances decision-making in complex tasks by integrating past and present information.
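The abstract only sketches the idea of pairing a policy with an external associative memory, so the snippet below is a minimal illustrative sketch, not the authors' implementation: past experience embeddings are written into a small slot memory and retrieved at decision time with cross-attention, then fused with the current observation. All names (`AssociativeMemory`, `MemoryAugmentedPolicy`), sizes, the ring-buffer write rule, and the concatenation-based fusion are assumptions made for illustration.

```python
# Hedged sketch of "retrieving experience from associative memory" for a policy.
# Assumes PyTorch; every design choice here is illustrative, not taken from the paper.
import torch
import torch.nn as nn


class AssociativeMemory(nn.Module):
    """Fixed-size slot memory queried by cross-attention (illustrative)."""

    def __init__(self, embed_dim: int = 64, num_slots: int = 32, num_heads: int = 4):
        super().__init__()
        self.num_slots = num_slots
        self.register_buffer("slots", torch.zeros(num_slots, embed_dim))
        self.write_ptr = 0
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

    @torch.no_grad()
    def write(self, experience_emb: torch.Tensor) -> None:
        # Overwrite the oldest slot (simple ring buffer; the paper may use another rule).
        self.slots[self.write_ptr % self.num_slots] = experience_emb
        self.write_ptr += 1

    def read(self, query: torch.Tensor) -> torch.Tensor:
        # query: (batch, embed_dim) -> summary of associated past experience.
        q = query.unsqueeze(1)                                   # (batch, 1, embed_dim)
        mem = self.slots.unsqueeze(0).expand(query.size(0), -1, -1)
        retrieved, _ = self.attn(q, mem, mem)                    # attend over memory slots
        return retrieved.squeeze(1)


class MemoryAugmentedPolicy(nn.Module):
    """Fuses the current observation embedding with retrieved memory before acting."""

    def __init__(self, obs_dim: int, act_dim: int, embed_dim: int = 64):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, embed_dim)
        self.memory = AssociativeMemory(embed_dim)
        self.head = nn.Sequential(
            nn.Linear(2 * embed_dim, 128), nn.ReLU(), nn.Linear(128, act_dim)
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        emb = self.encoder(obs)
        recalled = self.memory.read(emb)
        return self.head(torch.cat([emb, recalled], dim=-1))


if __name__ == "__main__":
    policy = MemoryAugmentedPolicy(obs_dim=16, act_dim=4)
    obs = torch.randn(2, 16)
    policy.memory.write(policy.encoder(obs[0]))  # store one past experience embedding
    print(policy(obs).shape)                     # torch.Size([2, 4])
```

In this toy version the memory lets the policy condition on events that fell outside any fixed context window, which is the gap the abstract attributes to plain transformers in partially observable tasks.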
Cite
Text
Zelezetsky et al. "Re:Frame - Retrieving Experience from Associative Memory." ICLR 2025 Workshops: NFAM, 2025.
Markdown
[Zelezetsky et al. "Re:Frame - Retrieving Experience from Associative Memory." ICLR 2025 Workshops: NFAM, 2025.](https://mlanthology.org/iclrw/2025/zelezetsky2025iclrw-re/)
BibTeX
@inproceedings{zelezetsky2025iclrw-re,
title = {{Re:Frame - Retrieving Experience from Associative Memory}},
author = {Zelezetsky, Daniil and Cherepanov, Egor and Kovalev, Alexey and Panov, Aleksandr},
booktitle = {ICLR 2025 Workshops: NFAM},
year = {2025},
url = {https://mlanthology.org/iclrw/2025/zelezetsky2025iclrw-re/}
}