MAP Estimation with Denoisers: Convergence Rates and Guarantees

Abstract

Denoiser models have become powerful tools for inverse problems, enabling the use of pretrained networks to approximate the score of a smoothed prior distribution. These models are often used in heuristic iterative schemes aimed at solving Maximum a Posteriori (MAP) optimisation problems, where the proximal operator of the negative log-prior plays a central role. In practice, this operator is intractable, and practitioners plug in a pretrained denoiser as a surrogate, despite the lack of general theoretical justification for this substitution. In this work, we show that a simple algorithm, closely related to several used in practice, provably converges to the proximal operator under a log-concavity assumption on the prior $p$. We further show that this algorithm can be interpreted as gradient descent on smoothed proximal objectives. Our analysis thus provides a theoretical foundation for a class of empirically successful but previously heuristic methods.
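For intuition, here is a minimal sketch, not taken from the paper, of the kind of iteration the abstract describes: gradient descent on a smoothed proximal objective $-\log p_\sigma(x) + \|x - y\|^2 / (2\lambda)$, where the score of the smoothed prior $p_\sigma$ is recovered from a denoiser via Tweedie's identity, $\nabla \log p_\sigma(x) \approx (D_\sigma(x) - x)/\sigma^2$. All names and parameter choices below (`prox_via_denoiser`, `sigma`, `lam`, the step size and iteration count) are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def prox_via_denoiser(y, denoiser, sigma, lam, step=0.1, n_iters=200):
    """Hypothetical sketch: approximate prox_{lam * f}(y) with f = -log p
    by gradient descent on the smoothed proximal objective

        g(x) = -log p_sigma(x) + ||x - y||^2 / (2 * lam),

    using a denoiser D_sigma and Tweedie's identity to access the score
    of the smoothed prior:  grad log p_sigma(x) ~ (D_sigma(x) - x) / sigma^2.
    """
    x = y.copy()
    for _ in range(n_iters):
        score = (denoiser(x, sigma) - x) / sigma**2  # ~ grad log p_sigma(x)
        grad = -score + (x - y) / lam                # gradient of g at x
        x = x - step * grad
    return x

# Toy check with a standard Gaussian prior p = N(0, I), whose exact
# MMSE denoiser is D_sigma(x) = x / (1 + sigma^2).
gaussian_denoiser = lambda x, sigma: x / (1.0 + sigma**2)
y = np.array([2.0, -1.0])
x_hat = prox_via_denoiser(y, gaussian_denoiser, sigma=0.1, lam=1.0)
```

In this Gaussian toy case the iteration converges to $y / (1 + \lambda/(1+\sigma^2))$, which approaches the true proximal point $y / (1 + \lambda)$ of $f(x) = \|x\|^2/2$ as $\sigma \to 0$, consistent with the convergence-to-the-proximal-operator claim in the abstract.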

Cite

Text

Pesme et al. "MAP Estimation with Denoisers: Convergence Rates and Guarantees." Advances in Neural Information Processing Systems, 2025.

Markdown

[Pesme et al. "MAP Estimation with Denoisers: Convergence Rates and Guarantees." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/pesme2025neurips-map/)

BibTeX

@inproceedings{pesme2025neurips-map,
  title     = {{MAP Estimation with Denoisers: Convergence Rates and Guarantees}},
  author    = {Pesme, Scott and Meanti, Giacomo and Arbel, Michael and Mairal, Julien},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/pesme2025neurips-map/}
}