Gradient Guidance for Diffusion Models: An Optimization Perspective

Abstract

Diffusion models have demonstrated empirical successes in various applications and can be adapted to task-specific needs via guidance. This paper studies a form of gradient guidance for adapting a pre-trained diffusion model towards optimizing user-specified objectives. We establish a mathematical framework for guided diffusion to systematically study its optimization theory and algorithmic design. Our theoretical analysis reveals a strong link between guided diffusion models and optimization: gradient-guided diffusion models are essentially sampling solutions to a regularized optimization problem, where the regularization is imposed by the pre-training data. As for guidance design, directly using the gradient of an external objective function as guidance would jeopardize the structure of generated samples. We investigate a modified form of gradient guidance based on a forward prediction loss, which leverages the information in the pre-trained score function and provably preserves the latent structure. We further consider an iteratively fine-tuned version of gradient-guided diffusion, in which the guidance and the score network are both updated using newly generated samples. This process mimics a first-order optimization iteration in expectation, for which we prove an $\tilde{\mathcal{O}}(1/K)$ convergence rate to the global optimum when the objective function is concave. Our code is released at https://github.com/yukang123/GGDMOptim.git.
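
As a rough illustration of the kind of procedure the abstract describes, the sketch below adds a gradient-based guidance term, evaluated at a look-ahead (forward) prediction of the clean sample, to a pre-trained score during reverse diffusion. This is a minimal NumPy sketch under assumed interfaces (score_model, grad_f, guidance_scale, and a simple linear noise schedule are all illustrative names, not the paper's API); refer to the repository linked above for the authors' actual implementation.

import numpy as np

def gradient_guided_sampling(score_model, grad_f, dim, num_steps=1000,
                             guidance_scale=1.0, rng=None):
    """Sketch of reverse diffusion with gradient guidance (illustrative only)."""
    rng = rng or np.random.default_rng()
    betas = np.linspace(1e-4, 2e-2, num_steps)   # assumed linear noise schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    x = rng.standard_normal(dim)                  # start the chain from pure noise
    for t in reversed(range(num_steps)):
        ab_t = alpha_bars[t]

        # Pre-trained score estimate of grad_x log p_t(x).
        score = score_model(x, t)

        # Look-ahead (forward) prediction of the clean sample x_0 from x_t,
        # via Tweedie's formula with the pre-trained score.
        x0_hat = (x + (1.0 - ab_t) * score) / np.sqrt(ab_t)

        # Gradient guidance: push the predicted clean sample toward higher
        # objective value. The overall scaling is a tunable simplification;
        # the paper's forward-prediction-loss guidance is more refined.
        guidance = guidance_scale * grad_f(x0_hat)

        noise = rng.standard_normal(dim) if t > 0 else 0.0
        # Standard DDPM-style reverse update with the guided score.
        x = (x + betas[t] * (score + guidance)) / np.sqrt(alphas[t]) \
            + np.sqrt(betas[t]) * noise
    return x

For a toy concave objective such as f(x) = -||x - x*||^2, one would pass grad_f = lambda x: -2.0 * (x - x_star). Repeating this sampling loop while re-fitting the score network on the newly generated samples corresponds to the iteratively fine-tuned variant discussed in the abstract.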

Cite

Text

Guo et al. "Gradient Guidance for Diffusion Models: An Optimization Perspective." Neural Information Processing Systems, 2024. doi:10.52202/079017-2881

Markdown

[Guo et al. "Gradient Guidance for Diffusion Models: An Optimization Perspective." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/guo2024neurips-gradient/) doi:10.52202/079017-2881

BibTeX

@inproceedings{guo2024neurips-gradient,
  title     = {{Gradient Guidance for Diffusion Models: An Optimization Perspective}},
  author    = {Guo, Yingqing and Yuan, Hui and Yang, Yukang and Chen, Minshuo and Wang, Mengdi},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-2881},
  url       = {https://mlanthology.org/neurips/2024/guo2024neurips-gradient/}
}