Improved Sample Complexity Bounds for Diffusion Model Training

Abstract

Diffusion models have become the most popular approach to deep generative modeling of images, largely due to their empirical performance and reliability. From a theoretical standpoint, a number of recent works [CCL+23, CCSW22, BBDD24] have studied the iteration complexity of sampling, assuming access to an accurate diffusion model. In this work, we focus on understanding the sample complexity of training such a model: how many samples are needed to learn an accurate diffusion model using a sufficiently expressive neural network? Prior work [BMR20] showed bounds polynomial in the dimension, desired Total Variation error, and Wasserstein error. We show an exponential improvement in the dependence on Wasserstein error and depth, along with improved dependencies on other relevant parameters.
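To make the training setup concrete, the sketch below shows the standard denoising score matching objective that a score network for a diffusion model is typically trained on, which is the kind of training whose sample complexity the abstract refers to. This is a minimal illustrative sketch: the MLP architecture, the linear noise scale, the placeholder Gaussian-mixture data, and all hyperparameters are assumptions for exposition, not the construction or noise process analyzed in the paper.

```python
# Minimal sketch of denoising score matching for a diffusion model's
# score network. Architecture, noise scale, data, and hyperparameters
# are illustrative assumptions, not the paper's construction.
import torch
import torch.nn as nn

dim = 2
score_net = nn.Sequential(            # small MLP approximating s_theta(x, t)
    nn.Linear(dim + 1, 128), nn.SiLU(),
    nn.Linear(128, 128), nn.SiLU(),
    nn.Linear(128, dim),
)
opt = torch.optim.Adam(score_net.parameters(), lr=1e-3)

def sample_data(n):
    # Placeholder data distribution: mixture of two Gaussians.
    centers = torch.tensor([[2.0, 2.0], [-2.0, -2.0]])
    idx = torch.randint(0, 2, (n,))
    return centers[idx] + 0.3 * torch.randn(n, dim)

for step in range(1000):
    x0 = sample_data(256)
    t = torch.rand(x0.shape[0], 1)                  # diffusion time in (0, 1)
    sigma = 0.01 + t * (1.0 - 0.01)                 # simple linear noise scale (assumed)
    eps = torch.randn_like(x0)
    xt = x0 + sigma * eps                           # noised sample
    target = -eps / sigma                           # score of the Gaussian perturbation
    pred = score_net(torch.cat([xt, t], dim=1))
    loss = ((pred - target) ** 2).sum(dim=1).mean() # denoising score matching loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The number of fresh draws from `sample_data` consumed during training plays the role of the sample complexity studied in the paper, with accuracy measured in Total Variation and Wasserstein distance for the resulting sampler.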

Cite

Text

Gupta et al. "Improved Sample Complexity Bounds for Diffusion Model Training." Neural Information Processing Systems, 2024. doi:10.52202/079017-1296

Markdown

[Gupta et al. "Improved Sample Complexity Bounds for Diffusion Model Training." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/gupta2024neurips-improved/) doi:10.52202/079017-1296

BibTeX

@inproceedings{gupta2024neurips-improved,
  title     = {{Improved Sample Complexity Bounds for Diffusion Model Training}},
  author    = {Gupta, Shivam and Parulekar, Aditya and Price, Eric and Xun, Zhiyang},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-1296},
  url       = {https://mlanthology.org/neurips/2024/gupta2024neurips-improved/}
}