Improved Convergence Rate for Diffusion Probabilistic Models
Abstract
Score-based diffusion models have achieved remarkable empirical performance in machine learning and artificial intelligence for their ability to generate high-quality new data instances from complex distributions. Improving our understanding of diffusion models, mainly through convergence analysis, has attracted significant interest. Despite many theoretical attempts, a substantial gap remains between theory and practice. Toward closing this gap, we establish an iteration complexity of order $d^{1/3}\varepsilon^{-2/3}$, which improves upon $d^{5/12}\varepsilon^{-1}$, the best complexity known before our work. Our convergence analysis is based on a randomized midpoint method, which was first proposed for log-concave sampling (Shen & Lee, 2019) and later extended to diffusion models by Gupta et al. (2024). Our theory accommodates $\varepsilon$-accurate score estimates and does not require log-concavity of the target distribution. Moreover, the algorithm can be parallelized to run in only $O(\log^2(d/\varepsilon))$ parallel rounds, in a manner similar to prior works.
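To make the core idea concrete, below is a minimal sketch of one randomized midpoint step for a reverse-time SDE with unit diffusion, in the spirit of Shen & Lee (2019). The drift form b(x, t) = x/2 + score(x, t) assumes a variance-preserving (Ornstein-Uhlenbeck) forward process, and score stands in for the learned score estimate; this is an illustrative simplification, not the exact sampler analyzed in the paper.

import numpy as np

def randomized_midpoint_step(x, t, h, score, rng):
    # One randomized midpoint step for dx = b(x, t) dt + dB_t, with an
    # assumed reverse-time drift b(x, t) = x / 2 + score(x, t).
    b = lambda y, s: 0.5 * y + score(y, s)

    # Draw a uniformly random intermediate time t + alpha * h.
    alpha = rng.uniform()

    # Brownian increments over [t, t + alpha*h] and [t + alpha*h, t + h];
    # the midpoint and the full step share the same underlying path.
    w1 = np.sqrt(alpha * h) * rng.standard_normal(x.shape)
    w2 = np.sqrt((1.0 - alpha) * h) * rng.standard_normal(x.shape)

    # Euler predictor to the randomized midpoint.
    x_mid = x + alpha * h * b(x, t) + w1

    # Full step uses the drift at the midpoint: h * b(x_mid, .) is an
    # unbiased estimate of the drift integral over [t, t + h].
    return x + h * b(x_mid, t + alpha * h) + (w1 + w2)

Iterating this step from Gaussian noise along the reverse-time horizon produces samples; the paper's contribution is bounding how many such iterations suffice.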
Cite
Text
Li and Jiao. "Improved Convergence Rate for Diffusion Probabilistic Models." International Conference on Learning Representations, 2025.
BibTeX
@inproceedings{li2025iclr-improved,
title = {{Improved Convergence Rate for Diffusion Probabilistic Models}},
author = {Li, Gen and Jiao, Yuchen},
booktitle = {International Conference on Learning Representations},
year = {2025},
url = {https://mlanthology.org/iclr/2025/li2025iclr-improved/}
}