Communication Compression for Tensor Parallel LLM Inference

Abstract

Large Language Models (LLMs) have pushed the frontier of artificial intelligence but comprise hundreds of billions of parameters and operations. To reduce inference latency, LLMs are deployed on multiple hardware accelerators through various Model Parallelism strategies. Our paper examines one such strategy, Tensor Parallelism, and proposes to reduce latency by compressing inter-accelerator communication. We leverage fine-grained quantization techniques to compress selected activations by 3.5x to 4.5x. Our proposed method yields up to a 2x reduction in time-to-first-token (TTFT) with negligible degradation in model performance.

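The compression ratios quoted in the abstract are consistent with low-bit, group-wise (fine-grained) quantization of activations. The sketch below is a minimal illustration of that idea, assuming int8 values with one fp32 scale per group of 128 elements applied to activations before an inter-accelerator exchange; the group size, numeric formats, and function names are illustrative assumptions, not the authors' implementation.

# Illustrative sketch (not the paper's implementation): group-wise int8
# quantization of an activation tensor, as might be applied to the partial
# results exchanged between accelerators in a tensor-parallel all-reduce.
# The group size and int8 format are assumptions for illustration only.
import numpy as np

GROUP_SIZE = 128  # assumed number of activation values sharing one scale

def quantize_groupwise(x: np.ndarray, group_size: int = GROUP_SIZE):
    """Quantize a flat fp32 activation vector to int8 with one scale per group."""
    x = x.reshape(-1, group_size)
    scales = np.abs(x).max(axis=1, keepdims=True) / 127.0
    scales = np.where(scales == 0, 1.0, scales)  # guard against all-zero groups
    q = np.clip(np.round(x / scales), -127, 127).astype(np.int8)
    return q, scales.astype(np.float32)

def dequantize_groupwise(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """Recover fp32 activations from int8 values and per-group scales."""
    return (q.astype(np.float32) * scales).reshape(-1)

# Example: compress activations before communication, decompress after.
activations = np.random.randn(4096 * GROUP_SIZE).astype(np.float32)
q, s = quantize_groupwise(activations)
restored = dequantize_groupwise(q, s)

payload_fp32 = activations.nbytes
payload_quantized = q.nbytes + s.nbytes
print(f"compression ratio: {payload_fp32 / payload_quantized:.2f}x")
print(f"max abs error: {np.abs(activations - restored).max():.4f}")

With these assumed settings the communicated payload shrinks by roughly 3.9x (1 byte per value plus one 4-byte scale per 128 values, versus 4 bytes per fp32 value), which falls within the 3.5x to 4.5x range reported in the abstract.
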
Cite

Text

Hansen-Palmus et al. "Communication Compression for Tensor Parallel LLM Inference." NeurIPS 2024 Workshops: Compression, 2024.

Markdown

[Hansen-Palmus et al. "Communication Compression for Tensor Parallel LLM Inference." NeurIPS 2024 Workshops: Compression, 2024.](https://mlanthology.org/neuripsw/2024/hansenpalmus2024neuripsw-communication/)

BibTeX

@inproceedings{hansenpalmus2024neuripsw-communication,
  title     = {{Communication Compression for Tensor Parallel LLM Inference}},
  author    = {Hansen-Palmus, Jan and Le, Michael Truong and Hausdörfer, Oliver and Verma, Alok},
  booktitle = {NeurIPS 2024 Workshops: Compression},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/hansenpalmus2024neuripsw-communication/}
}