TreeKV: Smooth Key-Value Cache Compression with Tree Structures
Abstract
Efficient key-value (KV) cache compression is critical for scaling transformer-based Large Language Models (LLMs) to long sequences and resource-limited settings. Existing methods evict tokens based on their positions or importance scores, but position-based strategies can miss crucial information outside predefined regions, while those relying on global importance scores suffer from strong regional biases, limiting the KV cache's overall context retention and potentially impairing the performance of LLMs on complex tasks. Our wavelet analysis reveals that as tokens approach the end of the sequence, their contributions to generation gradually increase and tend to diverge more from those of neighboring tokens, indicating a smooth transition with increasing complexity and variability from distant to nearby context. Motivated by this observation, we propose TreeKV, an intuitive, training-free method that employs a tree structure for smooth cache compression. TreeKV maintains a fixed cache size, allowing LLMs to deliver high-quality output in long-text scenarios, and is applicable during both the generation and prefilling stages. TreeKV consistently surpasses all baseline models on language modeling tasks on PG19 and OpenWebText2, enabling LLMs trained with a short context window to generalize to longer windows with a 16x cache reduction. On the LongBench benchmark, TreeKV achieves the best performance with only 6% of the cache budget at optimal efficiency.
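The abstract only describes the mechanism at a high level. The sketch below is a hypothetical illustration, not the paper's actual algorithm: it shows one way a fixed-budget cache could keep recent token positions dense while thinning older positions to a progressively coarser, tree-like granularity. All names here (TreeStyleCache, budget, insert) are invented for this sketch.

from collections import deque


class TreeStyleCache:
    """Illustrative fixed-budget cache: dense near the sequence end, sparse far from it."""

    def __init__(self, budget: int):
        assert budget >= 2
        self.budget = budget          # fixed number of cached positions
        self.positions = deque()      # token positions currently kept

    def insert(self, pos: int) -> None:
        """Add a new token position, evicting one old position if the cache is full."""
        if len(self.positions) >= self.budget:
            self._evict()
        self.positions.append(pos)

    def _evict(self) -> None:
        # Evict only from the older half of the cache: drop the entry whose gap to its
        # predecessor is smallest, so surviving old positions stay evenly spread (coarse)
        # while recent positions remain dense. This is a stand-in heuristic, not TreeKV's rule.
        half = len(self.positions) // 2
        old = list(self.positions)[:half]
        gaps = [old[i + 1] - old[i] for i in range(len(old) - 1)]
        victim = old[gaps.index(min(gaps)) + 1] if gaps else old[0]
        self.positions.remove(victim)


if __name__ == "__main__":
    cache = TreeStyleCache(budget=8)
    for t in range(32):
        cache.insert(t)
    # Older positions remain sparse while the most recent positions stay dense.
    print(sorted(cache.positions))

In this toy version, the eviction rule alone produces the smooth coarse-to-dense layout; the paper's method additionally decides evictions with a tree structure informed by its wavelet-based observation.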
Cite
Text
He et al. "TreeKV: Smooth Key-Value Cache Compression with Tree Structures." International Joint Conference on Artificial Intelligence, 2025. doi:10.24963/IJCAI.2025/899

Markdown
[He et al. "TreeKV: Smooth Key-Value Cache Compression with Tree Structures." International Joint Conference on Artificial Intelligence, 2025.](https://mlanthology.org/ijcai/2025/he2025ijcai-treekv/) doi:10.24963/IJCAI.2025/899

BibTeX
@inproceedings{he2025ijcai-treekv,
title = {{TreeKV: Smooth Key-Value Cache Compression with Tree Structures}},
author = {He, Ziwei and Yuan, Jian and Bai, Haoli and Leng, Jingwen and Jiang, Bo},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2025},
pages = {8086--8094},
doi = {10.24963/IJCAI.2025/899},
url = {https://mlanthology.org/ijcai/2025/he2025ijcai-treekv/}
}