A Mixture-Based Framework for Guiding Diffusion Models
Abstract
Denoising diffusion models have driven significant progress in Bayesian inverse problems. Recent approaches use pre-trained diffusion models as priors to solve a wide range of such problems, leveraging only inference-time compute and thereby eliminating the need to retrain task-specific models on the same dataset. To approximate the posterior of a Bayesian inverse problem, a diffusion model samples from a sequence of intermediate posterior distributions, each with an intractable likelihood function. This work proposes a novel mixture approximation of these intermediate distributions. Since direct gradient-based sampling of these mixtures is infeasible due to intractable terms, we propose a practical method based on Gibbs sampling. We validate our approach through extensive experiments on image inverse problems, utilizing both pixel- and latent-space diffusion priors, as well as on source separation with an audio diffusion model. The code is available at https://www.github.com/badr-moufad/mgdm.
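The abstract's key idea is to sidestep a joint distribution that cannot be sampled directly by Gibbs sampling: alternately drawing each variable from its conditional distribution. As a minimal illustration of that generic principle (not the paper's actual algorithm, which operates on diffusion-model posteriors), the sketch below Gibbs-samples a bivariate Gaussian with correlation `rho` using only its one-dimensional conditionals:

```python
import numpy as np

# Illustrative Gibbs sampler for a standard bivariate Gaussian with
# correlation rho. Each full conditional p(x | y) and p(y | x) is a
# one-dimensional Gaussian that is trivial to sample, even if the joint
# were awkward to sample directly.
rng = np.random.default_rng(0)
rho = 0.8
n_steps = 5000

x, y = 0.0, 0.0
samples = np.empty((n_steps, 2))
for t in range(n_steps):
    # x | y ~ N(rho * y, 1 - rho^2)
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))
    # y | x ~ N(rho * x, 1 - rho^2)
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))
    samples[t] = (x, y)

# After burn-in, the empirical correlation approaches rho.
corr = np.corrcoef(samples[1000:].T)[0, 1]
print(corr)
```

The same alternating-conditional structure underlies the paper's sampler, except that there the conditionals come from the proposed mixture approximation of the intermediate diffusion posteriors.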
Cite
Text
Janati et al. "A Mixture-Based Framework for Guiding Diffusion Models." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Janati et al. "A Mixture-Based Framework for Guiding Diffusion Models." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/janati2025icml-mixturebased/)
BibTeX
@inproceedings{janati2025icml-mixturebased,
title = {{A Mixture-Based Framework for Guiding Diffusion Models}},
author = {Janati, Yazid and Moufad, Badr and El Qassime, Mehdi Abou and Oliviero Durmus, Alain and Moulines, Eric and Olsson, Jimmy},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {26830--26876},
volume = {267},
url = {https://mlanthology.org/icml/2025/janati2025icml-mixturebased/}
}