Accelerated Diffusion Models via Speculative Sampling

Abstract

Speculative sampling is a popular technique for accelerating inference in Large Language Models by generating candidate tokens using a fast draft model and then accepting or rejecting them based on the target model’s distribution. While speculative sampling was previously limited to discrete sequences, we extend it to diffusion models, which generate samples via continuous, vector-valued Markov chains. In this context, the target model is a high-quality but computationally expensive diffusion model. We propose various drafting strategies, including a simple and effective approach that does not require training a draft model and is applicable out-of-the-box to any diffusion model. We demonstrate significant generation speedup on various diffusion models, halving the number of function evaluations while generating exact samples from the target model. Finally, we also show how this procedure can be used to accelerate Langevin diffusions for sampling from unnormalized distributions.
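To make the accept/reject mechanism described above concrete, the following is a minimal illustrative sketch of one speculative step between two Gaussian transition kernels (as arise in diffusion sampling), not the paper's actual drafting strategies. The function names `draft_mean_fn` and `target_mean_fn`, the shared isotropic covariance, and the residual-sampling fallback are all assumptions made for the sketch.

```python
# Hypothetical sketch: speculative accept/reject for continuous Gaussian
# transition kernels. This is NOT the paper's algorithm, only an illustration
# of the underlying coupling idea.
import numpy as np
from scipy.stats import multivariate_normal


def speculative_step(x, draft_mean_fn, target_mean_fn, sigma2, rng):
    """Draft from a cheap kernel, then accept/reject so the returned sample
    is exactly distributed under the target kernel N(target_mean_fn(x), sigma2*I)."""
    d = x.shape[0]
    cov = sigma2 * np.eye(d)
    q = multivariate_normal(mean=draft_mean_fn(x), cov=cov)   # draft kernel
    p = multivariate_normal(mean=target_mean_fn(x), cov=cov)  # target kernel

    x_draft = q.rvs(random_state=rng)
    # Accept the draft with probability min(1, p(x_draft) / q(x_draft)).
    log_ratio = p.logpdf(x_draft) - q.logpdf(x_draft)
    if np.log(rng.uniform()) < min(0.0, log_ratio):
        return x_draft, True

    # Rejected: sample from the residual distribution proportional to
    # max(p - q, 0) via rejection sampling, so the overall law is exactly p.
    while True:
        x_new = p.rvs(random_state=rng)
        accept_prob = max(0.0, 1.0 - np.exp(q.logpdf(x_new) - p.logpdf(x_new)))
        if rng.uniform() < accept_prob:
            return x_new, False


# Example usage with toy means; in a diffusion model these would come from a
# cheap draft predictor and the expensive target denoiser, respectively.
rng = np.random.default_rng(0)
x0 = np.zeros(4)
x1, accepted = speculative_step(
    x0,
    draft_mean_fn=lambda x: 0.9 * x,
    target_mean_fn=lambda x: 0.95 * x,
    sigma2=0.1,
    rng=rng,
)
```

In the sketch, accepted drafts are "free" target samples, and the residual step guarantees exactness even when a draft is rejected; the speedup in practice comes from how often the cheap drafts are accepted.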

Cite

Text

De Bortoli et al. "Accelerated Diffusion Models via Speculative Sampling." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[De Bortoli et al. "Accelerated Diffusion Models via Speculative Sampling." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/debortoli2025icml-accelerated/)

BibTeX

@inproceedings{debortoli2025icml-accelerated,
  title     = {{Accelerated Diffusion Models via Speculative Sampling}},
  author    = {De Bortoli, Valentin and Galashov, Alexandre and Gretton, Arthur and Doucet, Arnaud},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {12590--12631},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/debortoli2025icml-accelerated/}
}