Learned Prefix Caching for Efficient LLM Inference

Abstract

Prefix caching is a key technique for reducing Large Language Model (LLM) inference costs. However, the prevalent least-recently-used (LRU) eviction policy falls well short of the optimal algorithm. This paper introduces LPC, the first learned method for LLM prefix cache eviction. LPC analyzes conversational content to provide predictive guidance for eviction, determining which conversations are likely to continue. These predictions, combined with last-access timestamps, inform more effective cache management. Extensive evaluations across three real-world datasets demonstrate that LPC reduces the cache size required for equivalent hit ratios by 18-47% and improves LLM prefilling throughput by 11% in an emulated environment.
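
To illustrate the idea at a high level, the sketch below shows a prefix cache whose eviction decision combines a last-access timestamp with a predicted probability that the conversation will continue. This is a minimal illustration, not the paper's algorithm: the predictor callable, the entry fields, and the scoring rule that blends prediction with recency are all assumptions made for this example.

# Minimal sketch of prediction-guided prefix cache eviction.
# `predictor` stands in for a learned content-based model (hypothetical here);
# the score that combines prediction and recency is an illustrative assumption.
import time
from dataclasses import dataclass, field

@dataclass
class CacheEntry:
    kv_blocks: object                  # cached KV state for this prefix
    last_access: float = field(default_factory=time.monotonic)
    p_continue: float = 0.5            # predicted probability the conversation continues

class LearnedPrefixCache:
    def __init__(self, capacity: int, predictor):
        self.capacity = capacity
        self.predictor = predictor     # callable: conversation text -> probability in [0, 1]
        self.entries: dict[str, CacheEntry] = {}

    def access(self, prefix_id: str, conversation_text: str, kv_blocks=None):
        """Record a (re)use of a prefix; insert it if absent, evicting if full."""
        now = time.monotonic()
        if prefix_id in self.entries:
            entry = self.entries[prefix_id]
            entry.last_access = now
        else:
            if len(self.entries) >= self.capacity:
                self._evict(now)
            entry = CacheEntry(kv_blocks=kv_blocks, last_access=now)
            self.entries[prefix_id] = entry
        # Refresh the learned prediction from the latest conversation content.
        entry.p_continue = self.predictor(conversation_text)
        return entry

    def _evict(self, now: float):
        # Evict the entry with the lowest combined score: stale entries that are
        # unlikely to be continued go first.
        def score(e: CacheEntry) -> float:
            age = now - e.last_access
            return e.p_continue / (1.0 + age)
        victim = min(self.entries, key=lambda k: score(self.entries[k]))
        del self.entries[victim]

A real serving system would replace the placeholder predictor with the learned model described in the paper and manage KV blocks at the granularity used by the inference engine; the sketch only conveys how prediction and recency can jointly drive eviction.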

Cite

Text

Yang et al. "Learned Prefix Caching for Efficient LLM Inference." Advances in Neural Information Processing Systems, 2025.

Markdown

[Yang et al. "Learned Prefix Caching for Efficient LLM Inference." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/yang2025neurips-learned/)

BibTeX

@inproceedings{yang2025neurips-learned,
  title     = {{Learned Prefix Caching for Efficient LLM Inference}},
  author    = {Yang, Dongsheng and Li, Austin and Li, Kai and Lloyd, Wyatt},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/yang2025neurips-learned/}
}