Online Speculative Decoding
Abstract
Speculative decoding is a pivotal technique for accelerating the inference of large language models (LLMs) by employing a smaller draft model to predict the target model’s outputs. However, its efficacy can be limited by the draft model’s low predictive accuracy, particularly when faced with diverse text inputs and a significant capability gap between the draft and target models. We introduce online speculative decoding to address this challenge. The main idea is to continuously update the (multiple) draft model(s) on observed user query data. Adapting to the query distribution mitigates the shift between the draft model’s training distribution and the query distribution, enabling the draft model to predict the target model’s outputs more accurately. We develop a prototype of online speculative decoding based on knowledge distillation and evaluate it using both synthetic and real query data. The results show a substantial increase in the token acceptance rate, by 0.1 to 0.65, yielding a 1.42x to 2.17x reduction in latency. Our code is available at https://github.com/LiuXiaoxuanPKU/OSD.
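The loop the abstract describes, in which a draft model proposes tokens, the target model verifies them with the standard speculative-sampling accept/reject rule, and the draft is continually distilled toward the target on observed queries, can be sketched on a toy vocabulary. The sketch below is illustrative only: the distributions, function names, and the KL-gradient update are assumptions made for this example, not the paper's implementation (see the linked repository for the real system).

```python
# Toy sketch of online speculative decoding: a "draft" and a "target" are
# categorical distributions over a small vocabulary, and the draft's logits
# are continually distilled toward the target. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 8

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Fixed target distribution (stands in for the large model) and the draft's
# trainable logits (stands in for the small model's parameters).
target_logits = rng.normal(size=VOCAB)
draft_logits = np.zeros(VOCAB)

def speculative_step(draft_logits):
    """One draft-propose / target-verify step; returns (token, accepted)."""
    q = softmax(draft_logits)
    p = softmax(target_logits)
    x = rng.choice(VOCAB, p=q)                # draft proposes token x ~ q
    if rng.random() < min(1.0, p[x] / q[x]):  # accept w.p. min(1, p(x)/q(x))
        return x, True
    residual = np.maximum(p - q, 0.0)         # on reject, resample from (p-q)+
    if residual.sum() == 0.0:                 # p == q: nothing left to correct
        return x, True
    return rng.choice(VOCAB, p=residual / residual.sum()), False

def distill_update(draft_logits, lr=0.5):
    """Gradient step on KL(p || q); for softmax logits the gradient is q - p."""
    q = softmax(draft_logits)
    p = softmax(target_logits)
    return draft_logits - lr * (q - p)

def acceptance_rate(logits, trials=2000):
    return np.mean([speculative_step(logits)[1] for _ in range(trials)])

print(f"acceptance before online distillation: {acceptance_rate(draft_logits):.2f}")
for _ in range(200):                          # each observed query triggers an update
    draft_logits = distill_update(draft_logits)
print(f"acceptance after online distillation:  {acceptance_rate(draft_logits):.2f}")
```

As the draft distribution approaches the target's, the acceptance probability min(1, p(x)/q(x)) approaches 1, which is the mechanism behind the acceptance-rate gains reported in the abstract; the paper applies the same idea with neural draft and target models rather than toy distributions.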
Cite
Text
Liu et al. "Online Speculative Decoding." International Conference on Machine Learning, 2024.

Markdown
[Liu et al. "Online Speculative Decoding." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/liu2024icml-online/)

BibTeX
@inproceedings{liu2024icml-online,
  title     = {{Online Speculative Decoding}},
  author    = {Liu, Xiaoxuan and Hu, Lanxiang and Bailis, Peter and Cheung, Alvin and Deng, Zhijie and Stoica, Ion and Zhang, Hao},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {31131--31146},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/liu2024icml-online/}
}