Dynamic-Width Speculative Beam Decoding for LLM Inference

Abstract

Large language models (LLMs) based on the transformer architecture have shown outstanding performance across numerous real-world tasks. However, the autoregressive nature of these models makes inference slow and costly. Speculative decoding has emerged as a promising solution: a smaller auxiliary model drafts future tokens, which the larger model then validates in parallel, achieving a 1-2x speed-up. Although speculative decoding preserves the output distribution of multinomial sampling, multinomial sampling itself is prone to suboptimal outputs, whereas beam sampling is widely recognized for producing higher-quality results by maintaining multiple candidate sequences at each step. This paper explores the novel integration of speculative decoding with beam sampling, which raises four key challenges: (1) how to generate multiple sequences from the larger model's distribution given draft sequences from the small model; (2) how to dynamically optimize the number of beams to balance efficiency and accuracy; (3) how to efficiently verify multiple drafts in parallel; and (4) how to address the extra memory costs inherent in beam sampling. To address these challenges, we propose dynamic-width speculative beam decoding (DSBD). Specifically, we first introduce a novel draft-and-verification scheme that generates multiple sequences following the large model's distribution based on beam sampling trajectories from the small model. We then introduce an adaptive mechanism that dynamically tunes the number of beams based on the context, optimizing efficiency and effectiveness. In addition, we extend tree-based parallel verification to handle multiple trees simultaneously, accelerating the verification process. Finally, we describe a simple modification to our algorithm that mitigates the memory overhead of beam sampling. Experimental results show that our approach achieves a 1.5-1.9x speed-up and 1.8-2.5x lower energy consumption compared to beam sampling, with no loss in downstream performance. Moreover, it produces significantly higher-quality outputs than speculative decoding while maintaining similar time, memory, and energy costs. In summary, our method offers a more efficient and effective inference process for LLMs.
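To make the draft-and-verify idea concrete, below is a minimal, self-contained Python sketch of one speculative step applied per beam. It uses the standard speculative-sampling acceptance test (accept a drafted token x with probability min(1, p(x)/q(x)), resampling from the renormalized residual max(0, p − q) on rejection). The toy models, the fixed beam width, and the scoring rule are hypothetical stand-ins for illustration; they do not reproduce DSBD's multi-sequence verification scheme, adaptive width mechanism, or multi-tree parallel verification.

```python
# Illustrative sketch only: `draft_probs`, `target_probs`, and the beam
# bookkeeping below are toy stand-ins, not the DSBD algorithm itself.
import numpy as np

VOCAB, DRAFT_LEN, WIDTH, STEPS = 50, 4, 2, 5
rng = np.random.default_rng(0)

def _toy_dist(seq, seed):
    # Deterministic toy softmax distribution over the next token.
    r = np.random.default_rng(hash((tuple(seq), seed)) % 2**32)
    logits = r.normal(size=VOCAB)
    e = np.exp(logits - logits.max())
    return e / e.sum()

def draft_probs(seq):   # stand-in for the small draft model
    return _toy_dist(seq, seed=1)

def target_probs(seq):  # stand-in for the large target model
    return _toy_dist(seq, seed=2)

def speculative_step(seq):
    """Draft up to DRAFT_LEN tokens, verifying each with the standard
    speculative-sampling test: accept x with prob. min(1, p(x)/q(x))."""
    out = list(seq)
    for _ in range(DRAFT_LEN):
        q = draft_probs(out)
        x = int(rng.choice(VOCAB, p=q))       # sample from the draft model
        p = target_probs(out)
        if rng.random() < min(1.0, p[x] / q[x]):
            out.append(x)                     # token survives verification
        else:
            # Rejected: resample from the residual max(0, p - q),
            # renormalized, which keeps the target distribution exact.
            resid = np.maximum(p - q, 0.0)
            out.append(int(rng.choice(VOCAB, p=resid / resid.sum())))
            break
    return out

def target_logprob(seq):
    # Sequence score under the toy target model (skips the BOS token).
    return sum(np.log(target_probs(seq[:i])[t])
               for i, t in enumerate(seq[1:], start=1))

beams = [[0]]  # start from a single BOS-like token
for _ in range(STEPS):
    # Extend every beam speculatively several times, keep the top WIDTH.
    candidates = [speculative_step(b) for b in beams for _ in range(WIDTH)]
    beams = sorted(candidates, key=target_logprob, reverse=True)[:WIDTH]

print("final beams:", beams)
```

The residual-resampling step is what lets speculative decoding match the large model's distribution exactly; the paper's contribution is extending this guarantee from a single sequence to a set of beams while tuning the beam width on the fly and controlling memory cost.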

Cite

Text

Qin et al. "Dynamic-Width Speculative Beam Decoding for LLM Inference." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I23.34690

Markdown

[Qin et al. "Dynamic-Width Speculative Beam Decoding for LLM Inference." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/qin2025aaai-dynamic/) doi:10.1609/AAAI.V39I23.34690

BibTeX

@inproceedings{qin2025aaai-dynamic,
  title     = {{Dynamic-Width Speculative Beam Decoding for LLM Inference}},
  author    = {Qin, Zongyue and He, Zifan and Prakriya, Neha and Cong, Jason and Sun, Yizhou},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {25056--25064},
  doi       = {10.1609/AAAI.V39I23.34690},
  url       = {https://mlanthology.org/aaai/2025/qin2025aaai-dynamic/}
}