Generative Uncertainty in Diffusion Models
Abstract
Diffusion models have recently driven significant breakthroughs in generative modeling. While state-of-the-art models produce high-quality samples on average, individual samples can still be low quality. Detecting such samples without human inspection remains a challenging task. To address this, we propose a Bayesian framework for estimating generative uncertainty of synthetic samples. We outline how to make Bayesian inference practical for large, modern generative models and introduce a new semantic likelihood (evaluated in the latent space of a feature extractor) to address the challenges posed by high-dimensional sample spaces. Through our experiments, we demonstrate that the proposed generative uncertainty effectively identifies poor-quality samples and significantly outperforms existing uncertainty-based methods. Notably, our Bayesian framework can be applied post-hoc to any pretrained diffusion or flow matching model (via the Laplace approximation), and we propose simple yet effective techniques to minimize its computational overhead during sampling.
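The post-hoc recipe sketched in the abstract can be summarized as: draw several weight configurations from a Laplace posterior over (a subset of) the pretrained denoiser's weights, regenerate a sample from the same initial noise under each draw, and score its generative uncertainty as the variance of the resulting outputs in the latent space of a feature extractor. The snippet below is a minimal, hypothetical illustration of that idea in PyTorch, not the paper's actual implementation; `sample_with_weights`, `feature_extractor`, and the diagonal posterior parameters (`weight_mean`, `weight_var`) are assumed placeholders.

```python
import torch

# Hypothetical sketch: generative uncertainty of a single diffusion sample
# under a diagonal Laplace posterior over (e.g. last-layer) denoiser weights.
# `sample_with_weights(z0, w)` is assumed to run the reverse diffusion process
# from initial noise z0 with weight draw w; `feature_extractor` is a pretrained
# encoder whose latent space stands in for the paper's semantic likelihood.

@torch.no_grad()
def generative_uncertainty(z0, weight_mean, weight_var,
                           sample_with_weights, feature_extractor, n_draws=5):
    """Variance of regenerated samples in feature space across posterior draws."""
    feats = []
    for _ in range(n_draws):
        # Draw weights from the Laplace posterior N(weight_mean, diag(weight_var)).
        w = weight_mean + weight_var.sqrt() * torch.randn_like(weight_mean)
        # Re-run the reverse process from the *same* initial noise z0.
        x = sample_with_weights(z0, w)
        # Embed the generated sample with the feature extractor.
        feats.append(feature_extractor(x))
    feats = torch.stack(feats)            # (n_draws, feat_dim)
    return feats.var(dim=0).sum().item()  # scalar uncertainty score
```

High scores from such a procedure would flag samples whose appearance changes substantially across plausible weight draws, i.e. candidates for low-quality generations.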
Cite
Text
Jazbec et al. "Generative Uncertainty in Diffusion Models." Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, 2025.
Markdown
[Jazbec et al. "Generative Uncertainty in Diffusion Models." Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, 2025.](https://mlanthology.org/uai/2025/jazbec2025uai-generative/)
BibTeX
@inproceedings{jazbec2025uai-generative,
title = {{Generative Uncertainty in Diffusion Models}},
author = {Jazbec, Metod and Wong-Toi, Eliot and Xia, Guoxuan and Zhang, Dan and Nalisnick, Eric and Mandt, Stephan},
booktitle = {Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence},
year = {2025},
pages = {1837--1858},
volume = {286},
url = {https://mlanthology.org/uai/2025/jazbec2025uai-generative/}
}