Diffusion Posterior Sampling Is Computationally Intractable
Abstract
Diffusion models are a remarkably effective way of learning and sampling from a distribution $p(x)$. In posterior sampling, one is also given a measurement model $p(y \mid x)$ and a measurement $y$, and would like to sample from $p(x \mid y)$. Posterior sampling is useful for tasks such as inpainting, super-resolution, and MRI reconstruction, so a number of recent works have given algorithms to heuristically approximate it; but none are known to converge to the correct distribution in polynomial time. In this paper we show that posterior sampling is computationally intractable: under the most basic assumption in cryptography—that one-way functions exist—there are instances for which every algorithm takes superpolynomial time, even though unconditional sampling is provably fast. We also show that the exponential-time rejection sampling algorithm is essentially optimal under the stronger plausible assumption that there are one-way functions that take exponential time to invert.
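For intuition, here is a minimal sketch of the rejection sampling baseline the abstract calls essentially optimal. The callables `unconditional_sampler` and `likelihood` are hypothetical stand-ins for a trained diffusion sampler for $p(x)$ and the known measurement model $p(y \mid x)$; accepted draws are exact posterior samples, but the expected number of trials is $\max_{x'} p(y \mid x') / p(y)$, which can be exponential.

```python
import numpy as np

def rejection_posterior_sample(unconditional_sampler, likelihood, y,
                               max_likelihood, rng=None):
    """Exact posterior sampling by rejection.

    Draw x ~ p(x) from the (fast) unconditional sampler, then accept
    with probability p(y | x) / max_x' p(y | x').  Accepted draws are
    distributed exactly as p(x | y) since p(x | y) ∝ p(x) p(y | x).
    """
    rng = rng if rng is not None else np.random.default_rng()
    while True:
        x = unconditional_sampler()  # one polynomial-time unconditional sample
        if rng.random() < likelihood(y, x) / max_likelihood:
            return x

if __name__ == "__main__":
    # Toy check: N(0, 1) prior with measurement y = x + N(0, 1);
    # the true posterior is N(y / 2, 1 / 2).
    rng = np.random.default_rng(0)

    def lik(y, x):
        return np.exp(-0.5 * (y - x) ** 2)  # unnormalized Gaussian likelihood

    draws = [rejection_posterior_sample(rng.standard_normal, lik, 1.0, 1.0, rng)
             for _ in range(2000)]
    print(np.mean(draws))  # ≈ 0.5
```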
Cite
Text
Gupta et al. "Diffusion Posterior Sampling Is Computationally Intractable." International Conference on Machine Learning, 2024.

Markdown

[Gupta et al. "Diffusion Posterior Sampling Is Computationally Intractable." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/gupta2024icml-diffusion/)

BibTeX
@inproceedings{gupta2024icml-diffusion,
title = {{Diffusion Posterior Sampling Is Computationally Intractable}},
author = {Gupta, Shivam and Jalal, Ajil and Parulekar, Aditya and Price, Eric and Xun, Zhiyang},
booktitle = {International Conference on Machine Learning},
year = {2024},
pages = {17020--17059},
volume = {235},
url = {https://mlanthology.org/icml/2024/gupta2024icml-diffusion/}
}