Distributed Inference Performance Optimization for LLMs on CPUs

Abstract

Large language models (LLMs) hold tremendous potential for addressing numerous real-world challenges, yet they typically demand significant computational resources and memory. Deploying LLMs on resource-limited hardware with restricted memory capacity is therefore challenging. Distributed computing is a prevalent strategy for mitigating single-node memory constraints and accelerating LLM inference. To reduce this hardware burden, we propose an efficient distributed inference optimization solution for LLMs on CPUs. We conduct experiments with the proposed solution on 5th Gen Intel Xeon Scalable Processors, and the results show a time per output token of 140 ms for a 72B-parameter LLM, well below the average human reading speed of roughly 200 ms per token.
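
To make the distributed-inference idea in the abstract concrete, the sketch below shows a generic row-parallel matrix multiply across CPU ranks using PyTorch's gloo backend: each rank stores only a slice of the weight matrix (reducing per-node memory) and an all-reduce combines the partial results. This is an illustrative assumption about one common way to distribute LLM layers across CPU nodes, not the paper's actual implementation or code.

```python
"""Minimal sketch of row-parallel CPU inference for one linear layer.

Assumes PyTorch with the gloo (CPU) collective backend; launch with, e.g.:
    torchrun --nproc_per_node=2 tensor_parallel_cpu_sketch.py
Illustrative only; not the authors' implementation.
"""
import torch
import torch.distributed as dist


def main():
    # gloo is PyTorch's CPU-friendly collective backend.
    dist.init_process_group(backend="gloo")
    rank, world = dist.get_rank(), dist.get_world_size()

    hidden, out_features = 4096, 4096
    assert hidden % world == 0
    chunk = hidden // world

    # Same input activations on every rank (fixed seed for the illustration).
    torch.manual_seed(0)
    x_full = torch.randn(1, hidden)

    # Each rank holds only its 1/world slice of the weight matrix,
    # which is how distribution eases single-node memory constraints.
    torch.manual_seed(1 + rank)
    weight_shard = torch.randn(chunk, out_features) * 0.01

    # Local partial matmul over this rank's slice of the hidden dimension,
    # then an all-reduce sums the partial outputs into the full result.
    x_shard = x_full[:, rank * chunk:(rank + 1) * chunk]
    partial = x_shard @ weight_shard
    dist.all_reduce(partial, op=dist.ReduceOp.SUM)

    if rank == 0:
        print("full layer output shape:", tuple(partial.shape))

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

In this pattern, per-rank memory for the layer's weights shrinks roughly in proportion to the number of ranks, at the cost of one collective communication per sharded layer per generated token.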

Cite

Text

He et al. "Distributed Inference Performance Optimization for LLMs on CPUs." ICLR 2024 Workshops: PML4LRS, 2024.

Markdown

[He et al. "Distributed Inference Performance Optimization for LLMs on CPUs." ICLR 2024 Workshops: PML4LRS, 2024.](https://mlanthology.org/iclrw/2024/he2024iclrw-distributed/)

BibTeX

@inproceedings{he2024iclrw-distributed,
  title     = {{Distributed Inference Performance Optimization for LLMs on CPUs}},
  author    = {He, Pujiang and Zhou, Shan and Li, Changqing and Huang, Wenhuan and Yu, Weifei and Wang, Duyi and Meng, Chen and Gui, Sheng},
  booktitle = {ICLR 2024 Workshops: PML4LRS},
  year      = {2024},
  url       = {https://mlanthology.org/iclrw/2024/he2024iclrw-distributed/}
}