Response Length Perception and Sequence Scheduling: An LLM-Empowered LLM Inference Pipeline

Abstract

Large language models (LLMs) have revolutionized the field of AI, demonstrating unprecedented capabilities across a wide range of tasks. However, LLM inference comes with significant computational costs. In this paper, we propose an efficient LLM inference pipeline that harnesses the power of LLMs themselves. Our approach begins by tapping into the potential of LLMs to accurately perceive and predict their response length with minimal overhead. Leveraging this information, we introduce an efficient sequence scheduling technique that groups queries with similar predicted response lengths into micro-batches. We evaluate our approach on real-world instruction datasets using a LLaMA-based model, and our results demonstrate an 86% improvement in inference throughput without compromising effectiveness. Notably, our method is orthogonal to other inference acceleration techniques, making it a valuable addition to existing toolkits (e.g., FlashAttention, quantization) for LLM inference.
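
The following is a minimal Python sketch of the sequence-scheduling idea described in the abstract. The predict_response_length callable is a hypothetical stand-in for the paper's length-perception step (e.g., an instruction-tuned LLM prompted to estimate its own response length); the function names and batch size are illustrative assumptions, not the authors' implementation.

# Minimal sketch of sequence scheduling: group queries with similar predicted
# response lengths into micro-batches to reduce padding during batched decoding.
# `predict_response_length` is an assumed/hypothetical callable, not from the paper's code.
from typing import Callable, List


def schedule_micro_batches(
    queries: List[str],
    predict_response_length: Callable[[str], int],
    micro_batch_size: int = 16,
) -> List[List[str]]:
    """Sort queries by predicted response length and slice them into micro-batches."""
    # Length-perception step: estimate a response length for every query.
    predicted = [(q, predict_response_length(q)) for q in queries]

    # Sort so queries with similar predicted lengths end up adjacent.
    predicted.sort(key=lambda pair: pair[1])

    # Slice the sorted queries into fixed-size micro-batches; each batch now
    # contains sequences of similar length, so fewer decoding steps are wasted
    # on sequences that have already finished.
    return [
        [q for q, _ in predicted[i : i + micro_batch_size]]
        for i in range(0, len(predicted), micro_batch_size)
    ]

Because each micro-batch finishes at roughly the same time, the batch slots freed by short responses are not held hostage by a single long one, which is where the throughput gain comes from.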

Cite

Text

Zheng et al. "Response Length Perception and Sequence Scheduling: An LLM-Empowered LLM Inference Pipeline." Neural Information Processing Systems, 2023.

Markdown

[Zheng et al. "Response Length Perception and Sequence Scheduling: An LLM-Empowered LLM Inference Pipeline." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/zheng2023neurips-response/)

BibTeX

@inproceedings{zheng2023neurips-response,
  title     = {{Response Length Perception and Sequence Scheduling: An LLM-Empowered LLM Inference Pipeline}},
  author    = {Zheng, Zangwei and Ren, Xiaozhe and Xue, Fuzhao and Luo, Yang and Jiang, Xin and You, Yang},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/zheng2023neurips-response/}
}