Learning Diffusion Priors from Observations by Expectation Maximization

Abstract

Diffusion models have recently proven to be remarkable priors for Bayesian inverse problems. However, training these models typically requires access to large amounts of clean data, which can be difficult to obtain in some settings. In this work, we present a novel method, based on the expectation-maximization algorithm, for training diffusion models from incomplete and noisy observations only. Unlike previous works, our method leads to proper diffusion models, which is crucial for downstream tasks. As part of our method, we propose and motivate an improved posterior sampling scheme for unconditional diffusion models. We present empirical evidence supporting the effectiveness of our method.
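
The EM scheme described above alternates between inferring the clean signal from each observation under the current prior (E-step) and refitting the prior to those inferences (M-step). As a minimal, self-contained illustration of that structure only, the sketch below substitutes a Gaussian prior for the diffusion model so that both steps are exact in closed form; the linear observation model y = Ax + n and all variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy analogue of the EM scheme from the abstract: learn a prior over x
# given only incomplete, noisy observations y = A x + n. The prior here
# is Gaussian (mu, S) rather than a diffusion model, so the E-step
# posterior and M-step refit are exact; in the paper's setting these
# become posterior sampling and denoiser retraining, respectively.

d, m, n = 4, 2, 500                           # latent dim, obs dim, dataset size
x_true = rng.normal(size=(n, d)) * np.array([3.0, 2.0, 1.0, 0.5]) + 1.0
A = rng.normal(size=(m, d))                   # incomplete measurement operator
sigma = 0.1                                   # observation noise std
Y = x_true @ A.T + sigma * rng.normal(size=(n, m))

mu, S = np.zeros(d), np.eye(d)                # initial (poor) prior
for _ in range(100):
    # E-step: Gaussian posterior p(x | y) under the current prior.
    P = np.linalg.inv(np.linalg.inv(S) + A.T @ A / sigma**2)   # posterior cov
    M = (Y @ A / sigma**2 + np.linalg.solve(S, mu)) @ P        # posterior means
    # M-step: refit the prior to the expected sufficient statistics.
    mu = M.mean(axis=0)
    C = M - mu
    S = P + C.T @ C / n

print("learned prior mean:", mu.round(2))
```

In the paper's setting, the E-step would instead draw samples with the proposed posterior sampling scheme for unconditional diffusion models, and the M-step would retrain the diffusion model on those samples as if they were clean data.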

Cite

Text

Rozet et al. "Learning Diffusion Priors from Observations by Expectation Maximization." Neural Information Processing Systems, 2024. doi:10.52202/079017-2783

Markdown

[Rozet et al. "Learning Diffusion Priors from Observations by Expectation Maximization." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/rozet2024neurips-learning/) doi:10.52202/079017-2783

BibTeX

@inproceedings{rozet2024neurips-learning,
  title     = {{Learning Diffusion Priors from Observations by Expectation Maximization}},
  author    = {Rozet, François and Andry, Gérôme and Lanusse, François and Louppe, Gilles},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-2783},
  url       = {https://mlanthology.org/neurips/2024/rozet2024neurips-learning/}
}