Generative Uncertainty in Diffusion Models
Abstract
Diffusion and flow matching models have recently driven significant breakthroughs in generative modeling. While state-of-the-art models produce high-quality samples on average, individual samples can still be low quality. Detecting such samples without human inspection remains a challenging task. To address this, we propose a Bayesian framework for estimating the generative uncertainty of synthetic samples. We outline how to make Bayesian inference practical for large, modern generative models and introduce a new semantic likelihood to address the challenges posed by high-dimensional sample spaces. Through our experiments, we demonstrate that the proposed generative uncertainty effectively identifies poor-quality samples and significantly outperforms existing uncertainty-based methods. Notably, our Bayesian framework can be applied post-hoc to any pretrained diffusion or flow matching model (via the Laplace approximation), and we propose simple yet effective techniques to minimize its computational overhead during sampling.
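To make the core idea concrete, here is a minimal, self-contained sketch of the post-hoc Laplace recipe the abstract alludes to, on a deliberately toy one-parameter "generator" rather than a real diffusion model: fit a point estimate, place a Gaussian (Laplace) approximation over the weight, then quantify generative uncertainty as the disagreement between generations under weights sampled from that posterior. All names and the toy model here are illustrative assumptions, not the paper's actual architecture or semantic likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "generator": x = w * z. We fit w by least squares, then place a
# post-hoc Laplace (Gaussian) approximation over w, mirroring the idea of
# applying a Laplace approximation to a pretrained generative model.
z = rng.normal(size=200)                   # latent inputs
x = 2.0 * z + 0.1 * rng.normal(size=200)   # observed data (true w = 2)

# MAP / least-squares estimate of the single weight w
w_hat = (z @ x) / (z @ z)

# Laplace approximation: posterior variance = noise variance divided by the
# Hessian of the squared loss w.r.t. w (which is z @ z for this toy model).
noise_var = np.mean((x - w_hat * z) ** 2)
w_var = noise_var / (z @ z)

# Generative uncertainty for a new latent: generate with K weight samples
# drawn from the Laplace posterior and measure their disagreement.
z_new = 1.5
w_samples = rng.normal(w_hat, np.sqrt(w_var), size=100)
generations = w_samples * z_new
uncertainty = generations.std()
print(f"w_hat={w_hat:.3f}, uncertainty={uncertainty:.4f}")
```

In a real diffusion or flow matching model the same pattern applies per generation (several posterior weight samples, several generated images, a disagreement score), which is why the overhead-reduction techniques mentioned in the abstract matter at sampling time.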
Cite
Text

Jazbec et al. "Generative Uncertainty in Diffusion Models." ICLR 2025 Workshops: QUESTION, 2025.

Markdown

[Jazbec et al. "Generative Uncertainty in Diffusion Models." ICLR 2025 Workshops: QUESTION, 2025.](https://mlanthology.org/iclrw/2025/jazbec2025iclrw-generative/)

BibTeX
@inproceedings{jazbec2025iclrw-generative,
  title     = {{Generative Uncertainty in Diffusion Models}},
  author    = {Jazbec, Metod and Wong-Toi, Eliot and Xia, Guoxuan and Zhang, Dan and Nalisnick, Eric and Mandt, Stephan},
  booktitle = {ICLR 2025 Workshops: QUESTION},
  year      = {2025},
  url       = {https://mlanthology.org/iclrw/2025/jazbec2025iclrw-generative/}
}