MagicPIG: LSH Sampling for Efficient LLM Generation

Abstract

Large language models (LLMs) with long context windows have gained significant attention. However, the KV cache, stored to avoid re-computation, has become a bottleneck. Leveraging the common insight that attention is sparse, various dynamic sparse or TopK-based attention approximation methods have been proposed. In this paper, we first show that TopK attention itself suffers from quality degradation on certain downstream tasks because attention is not always as sparse as expected. Rather than selecting the keys and values with the highest attention scores, sampling with theoretical guarantees can provide a better estimation of the attention output. To make sampling-based approximation practical in LLM generation, we propose MagicPIG, a heterogeneous system based on Locality Sensitive Hashing (LSH). MagicPIG significantly reduces the workload of attention computation while preserving high accuracy across diverse tasks. MagicPIG stores the LSH hash tables and runs the attention computation on the CPU, allowing it to serve longer contexts and larger batch sizes with high approximation accuracy. MagicPIG improves decoding throughput by $1.9\sim3.9\times$ across various GPU hardware and achieves 110ms decoding latency on a single RTX 4090 for the Llama-3.1-8B-Instruct model with a context of 96k tokens.
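The core idea, selecting keys via LSH collisions rather than exact TopK scores, can be sketched as follows. This is a minimal illustration using SimHash (random hyperplane hashing), not the paper's actual implementation; the table count `L`, bit width `K`, and all tensor shapes are hypothetical, and the paper's importance-weight correction for sampling probabilities is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 64, 1024            # head dim, context length (hypothetical)
K, L = 6, 4                # bits per hash code, number of tables (hypothetical)

keys = rng.standard_normal((n, d))
values = rng.standard_normal((n, d))
query = rng.standard_normal(d)

# SimHash: the sign pattern of K random projections gives a K-bit
# bucket id per table; nearby vectors tend to share buckets.
planes = rng.standard_normal((L, K, d))

def hash_codes(x, planes):
    # x: (..., d) -> integer bucket id per table, shape (..., L)
    bits = (np.einsum('lkd,...d->...lk', planes, x) > 0).astype(np.int64)
    return (bits << np.arange(K)).sum(-1)

key_codes = hash_codes(keys, planes)   # (n, L), precomputed once per context
q_codes = hash_codes(query, planes)    # (L,), computed per decoding step

# Sample the keys that collide with the query in at least one table.
mask = (key_codes == q_codes).any(-1)
idx = np.nonzero(mask)[0]

# Attention restricted to the sampled keys (unweighted sketch; MagicPIG
# additionally corrects the estimate for the LSH sampling probabilities).
scores = keys[idx] @ query / np.sqrt(d)
w = np.exp(scores - scores.max())
w /= w.sum()
out = w @ values[idx]
```

Because only the colliding subset of keys and values is touched, the per-step attention cost scales with the sample size rather than the full context length, which is what makes CPU-side attention over the hash tables viable.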

Cite

Text

Chen et al. "MagicPIG: LSH Sampling for Efficient LLM Generation." NeurIPS 2024 Workshops: AFM, 2024.

Markdown

[Chen et al. "MagicPIG: LSH Sampling for Efficient LLM Generation." NeurIPS 2024 Workshops: AFM, 2024.](https://mlanthology.org/neuripsw/2024/chen2024neuripsw-magicpig/)

BibTeX

@inproceedings{chen2024neuripsw-magicpig,
  title     = {{MagicPIG: LSH Sampling for Efficient LLM Generation}},
  author    = {Chen, Zhuoming and Sadhukhan, Ranajoy and Ye, Zihao and Zhou, Yang and Zhang, Jianyu and Nolte, Niklas and Tian, Yuandong and Douze, Matthijs and Bottou, L{\'e}on and Jia, Zhihao and Chen, Beidi},
  booktitle = {NeurIPS 2024 Workshops: AFM},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/chen2024neuripsw-magicpig/}
}