Linear Convergence of Diffusion Models Under the Manifold Hypothesis

Abstract

Score-matching generative models have proven successful at sampling from complex high-dimensional data distributions. In many applications, this distribution is believed to concentrate on a much lower-dimensional manifold of dimension $d$ embedded in the $D$-dimensional ambient space; this is known as the manifold hypothesis. The current best-known convergence guarantees are either linear in $D$ or polynomial (superlinear) in $d$; the latter exploits a novel integration scheme for the backward SDE. We take the best of both worlds and show that the number of steps diffusion models require in order to converge in Kullback-Leibler (KL) divergence is linear (up to logarithmic terms) in the intrinsic dimension $d$. Moreover, we show that this linear dependence is sharp.
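For context, a minimal sketch of the standard score-based setup to which such convergence results apply (assuming the usual Ornstein-Uhlenbeck forward noising; the paper's exact conventions may differ):

% Forward (noising) process run up to time T, started from the data distribution:
\[
  \mathrm{d}X_t = -X_t\,\mathrm{d}t + \sqrt{2}\,\mathrm{d}B_t,
  \qquad X_0 \sim p_{\mathrm{data}},
\]
% Time-reversed (sampling) process, driven by the score \nabla \log p_t of the law p_t of X_t,
% initialized at the Gaussian approximation of p_T:
\[
  \mathrm{d}Y_t = \bigl(Y_t + 2\,\nabla \log p_{T-t}(Y_t)\bigr)\,\mathrm{d}t
  + \sqrt{2}\,\mathrm{d}W_t,
  \qquad Y_0 \sim \mathcal{N}(0, I_D).
\]

Discretizing this backward SDE over $N$ steps with an estimated score gives the sampler whose KL error is analyzed; the result stated above is that $N$ need only scale linearly (up to logarithmic factors) in the intrinsic dimension $d$, rather than in the ambient dimension $D$.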

Cite

Text

Potaptchik et al. "Linear Convergence of Diffusion Models Under the Manifold Hypothesis." Proceedings of Thirty Eighth Conference on Learning Theory, 2025.

Markdown

[Potaptchik et al. "Linear Convergence of Diffusion Models Under the Manifold Hypothesis." Proceedings of Thirty Eighth Conference on Learning Theory, 2025.](https://mlanthology.org/colt/2025/potaptchik2025colt-linear/)

BibTeX

@inproceedings{potaptchik2025colt-linear,
  title     = {{Linear Convergence of Diffusion Models Under the Manifold Hypothesis}},
  author    = {Potaptchik, Peter and Azangulov, Iskander and Deligiannidis, George},
  booktitle = {Proceedings of Thirty Eighth Conference on Learning Theory},
  year      = {2025},
  pages     = {4668--4685},
  volume    = {291},
  url       = {https://mlanthology.org/colt/2025/potaptchik2025colt-linear/}
}