Variance Reduction of Diffusion Model's Gradients with Taylor Approximation-Based Control Variate

Abstract

Score-based models, trained with denoising score matching, are remarkably effective at generating high-dimensional data. However, the high variance of their training objective hinders optimisation. We reduce this variance with a control variate, derived via a $k$-th order Taylor expansion of the training objective and its gradient. We prove an equivalence between the two, demonstrate empirically the effectiveness of our approach in a low-dimensional problem setting, and study its effect on larger problems.
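As a rough illustration of the general idea the abstract describes (not the paper's actual estimator), a control variate subtracts a correlated, analytically tractable quantity from a Monte Carlo estimator to reduce its variance without changing its expectation. The sketch below uses a second-order Taylor expansion of a toy integrand as the control variate; the function, distribution, and coefficient choice are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)

# Toy target: estimate E[f(X)] for f(x) = exp(x), X ~ N(0, 1).
f = np.exp(x)

# Control variate: second-order Taylor expansion of f around 0,
# g(x) = 1 + x + x^2 / 2, whose mean under N(0, 1) is known: 1 + 0 + 1/2.
g = 1.0 + x + 0.5 * x**2
g_mean = 1.5

# Unbiased control-variate estimator: f(X) - c * (g(X) - E[g(X)]).
# c is set to the variance-optimal coefficient Cov(f, g) / Var(g).
c = np.cov(f, g)[0, 1] / np.var(g)
cv = f - c * (g - g_mean)

print(np.var(f), np.var(cv))  # per-sample variance drops substantially
```

Because the Taylor expansion is strongly correlated with the integrand near the bulk of the distribution, the corrected samples `cv` have much lower variance than the raw samples `f`, while both estimators share the same mean.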

Cite

Text

Jeha et al. "Variance Reduction of Diffusion Model's Gradients with Taylor Approximation-Based Control Variate." ICML 2024 Workshops: SPIGM, 2024.

Markdown

[Jeha et al. "Variance Reduction of Diffusion Model's Gradients with Taylor Approximation-Based Control Variate." ICML 2024 Workshops: SPIGM, 2024.](https://mlanthology.org/icmlw/2024/jeha2024icmlw-variance/)

BibTeX

@inproceedings{jeha2024icmlw-variance,
  title     = {{Variance Reduction of Diffusion Model's Gradients with Taylor Approximation-Based Control Variate}},
  author    = {Jeha, Paul and Grathwohl, Will Sussman and Andersen, Michael Riis and Ek, Carl Henrik and Frellsen, Jes},
  booktitle = {ICML 2024 Workshops: SPIGM},
  year      = {2024},
  url       = {https://mlanthology.org/icmlw/2024/jeha2024icmlw-variance/}
}