MELODI: Exploring Memory Compression for Long Contexts
Abstract
We present MELODI, a novel memory architecture designed to efficiently process long documents using short context windows. The key principle behind MELODI is to represent short-term and long-term memory as a hierarchical compression scheme across both transformer layers and context windows. Specifically, the short-term memory is achieved through recurrent compression of context windows across multiple layers, ensuring smooth transitions between windows. In contrast, the long-term memory performs further compression within a single middle layer and aggregates information across context windows, effectively consolidating crucial information from the entire history. Compared to a strong baseline, the Memorizing Transformer employing dense attention over a large long-term memory (64K key-value pairs), our method demonstrates superior performance on various long-context datasets while remarkably reducing the memory footprint by a factor of 8.
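The hierarchical scheme described above can be illustrated with a minimal sketch. This is not the paper's implementation: the function names are hypothetical, mean-pooling stands in for MELODI's learned compression layers, and the per-layer recurrence is collapsed into a single step. It only shows the shape of the idea: each context window refreshes a compressed short-term memory, while a more aggressively compressed summary is appended to a growing long-term memory.

```python
import numpy as np

def compress(tokens, ratio):
    """Compress an (n, d) block of token states by mean-pooling groups of
    `ratio` tokens. A stand-in for MELODI's trainable compression; the real
    model learns this transform across transformer layers."""
    n, d = tokens.shape
    assert n % ratio == 0, "window length must be divisible by the ratio"
    return tokens.reshape(n // ratio, ratio, d).mean(axis=1)

def process_document(windows, short_ratio=4, long_ratio=16):
    """Toy illustration of the two-tier memory: the short-term memory is
    recomputed (recurrently) for each window, while long-term summaries
    accumulate across all windows seen so far."""
    short_term = None
    long_term = []
    for w in windows:
        short_term = compress(w, short_ratio)       # short-term: mild compression, refreshed per window
        long_term.append(compress(w, long_ratio))   # long-term: stronger compression, aggregated over history
    return short_term, np.concatenate(long_term, axis=0)

# Example: 3 windows of 64 tokens each, hidden size 8.
windows = [np.random.randn(64, 8) for _ in range(3)]
st, lt = process_document(windows)
print(st.shape, lt.shape)  # short-term: (16, 8); long-term: (12, 8)
```

Note how the long-term memory grows only by 4 vectors per 64-token window here, which is the intuition behind the reported 8x memory-footprint reduction relative to storing dense key-value pairs for the whole history.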
Cite
Text

Chen et al. "MELODI: Exploring Memory Compression for Long Contexts." International Conference on Learning Representations, 2025.

Markdown

[Chen et al. "MELODI: Exploring Memory Compression for Long Contexts." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/chen2025iclr-melodi/)

BibTeX
@inproceedings{chen2025iclr-melodi,
title = {{MELODI: Exploring Memory Compression for Long Contexts}},
author = {Chen, Yinpeng and Hutchins, DeLesley and Jansen, Aren and Zhmoginov, Andrey and Racz, David and Andersen, Jesper Sparre},
booktitle = {International Conference on Learning Representations},
year = {2025},
url = {https://mlanthology.org/iclr/2025/chen2025iclr-melodi/}
}