BayesDiff: Estimating Pixel-Wise Uncertainty in Diffusion via Bayesian Inference

Abstract

Diffusion models have demonstrated impressive image generation capabilities, but low-quality generations still occur, and identifying them remains challenging due to the lack of a proper sample-wise metric. To address this, we propose BayesDiff, a pixel-wise uncertainty estimator for generations from diffusion models based on Bayesian inference. In particular, we derive a novel uncertainty iteration principle to characterize the uncertainty dynamics in diffusion, and leverage the last-layer Laplace approximation for efficient Bayesian inference. The estimated pixel-wise uncertainty not only can be aggregated into a sample-wise metric to filter out low-fidelity images but also aids in augmenting successful generations and rectifying artifacts in failed generations in text-to-image tasks. Extensive experiments demonstrate the efficacy of BayesDiff and its promise for practical applications.
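To make the core idea concrete, here is a minimal sketch of how a last-layer Laplace approximation can yield pixel-wise predictive variance that is then aggregated into a sample-wise filtering score. All shapes and names (`phi`, `sigma_diag`, the diagonal toy covariance) are illustrative assumptions, not the paper's actual implementation, which propagates uncertainty through the diffusion sampling iterations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: a batch of 4 generated images, each 8x8 pixels,
# with D=16 last-layer features per pixel (all names illustrative).
B, H, W, D = 4, 8, 8, 16
phi = rng.normal(size=(B, H, W, D))  # last-layer features per pixel

# Last-layer Laplace approximation: a Gaussian posterior N(w_MAP, Sigma)
# over the final linear layer's weights. Here Sigma is a toy diagonal.
sigma_diag = 0.05 * np.ones(D)

# Predictive (epistemic) variance per pixel: phi^T Sigma phi.
pixel_var = np.einsum("bhwd,d,bhwd->bhw", phi, sigma_diag, phi)

# Aggregate the pixel-wise uncertainty map into one score per sample.
sample_score = pixel_var.mean(axis=(1, 2))

# Filter: keep the half of the batch with the lowest uncertainty.
keep = np.argsort(sample_score)[: B // 2]
print(sample_score.shape, keep.shape)  # (4,) (2,)
```

The mean over pixels is just one possible aggregation; any permutation-invariant reduction (sum, max, a trimmed mean) could serve as the sample-wise metric, and the choice trades off sensitivity to localized artifacts against global fidelity.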

Cite

Text

Kou et al. "BayesDiff: Estimating Pixel-Wise Uncertainty in Diffusion via Bayesian Inference." International Conference on Learning Representations, 2024.

Markdown

[Kou et al. "BayesDiff: Estimating Pixel-Wise Uncertainty in Diffusion via Bayesian Inference." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/kou2024iclr-bayesdiff/)

BibTeX

@inproceedings{kou2024iclr-bayesdiff,
  title     = {{BayesDiff: Estimating Pixel-Wise Uncertainty in Diffusion via Bayesian Inference}},
  author    = {Kou, Siqi and Gan, Lei and Wang, Dequan and Li, Chongxuan and Deng, Zhijie},
  booktitle = {International Conference on Learning Representations},
  year      = {2024},
  url       = {https://mlanthology.org/iclr/2024/kou2024iclr-bayesdiff/}
}