Removing Structured Noise Using Diffusion Models
Abstract
Solving ill-posed inverse problems requires careful formulation of prior beliefs over the signals of interest and an accurate description of their manifestation into noisy measurements. Handcrafted signal priors based on, e.g., sparsity are increasingly being replaced by data-driven deep generative models, and several groups have recently shown that state-of-the-art score-based diffusion models yield particularly strong performance and flexibility. In this paper, we show that the powerful paradigm of posterior sampling with diffusion models can be extended to include rich, structured noise models. To that end, we propose a joint conditional reverse diffusion process with learned scores for the noise and signal-generating distributions. We demonstrate strong performance gains across various inverse problems with structured noise, outperforming competitive baselines based on normalizing flows, adversarial networks, and various posterior sampling methods for diffusion models. This opens up new opportunities and relevant practical applications of diffusion modeling for inverse problems in the context of non-Gaussian measurement models.
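To make the joint conditional reverse diffusion idea concrete, below is a minimal, self-contained toy sketch that samples the pair (x, n) jointly via annealed Langevin dynamics, coupling the two chains through the shared residual of y = Ax + n. The analytic Gaussian scores stand in for the learned score networks of the actual method, and the annealed relaxation gamma2 of the measurement constraint, the step sizes, and the schedule are all illustrative assumptions, not the paper's implementation.

import numpy as np

rng = np.random.default_rng(0)

d = 16                              # signal dimension
A = np.eye(d)                       # forward operator (identity: denoising)
x_true = rng.normal(0.0, 1.0, d)    # toy signal ~ N(0, I)
n_true = rng.normal(0.0, 0.5, d)    # toy structured noise ~ N(0, 0.25 I)
y = A @ x_true + n_true             # measurement y = A x + n

def score_x(x, sigma):
    # Stand-in for the learned signal score grad_x log p_sigma(x):
    # exact score of N(0, I) convolved with noise at level sigma.
    return -x / (1.0 + sigma**2)

def score_n(n, sigma):
    # Stand-in for the learned noise score grad_n log p_sigma(n):
    # exact score of N(0, 0.25 I) convolved with noise at level sigma.
    return -n / (0.25 + sigma**2)

sigmas = np.geomspace(1.0, 0.01, 10)   # annealed noise levels, high to low

x = rng.normal(size=d)   # initialize both chains from noise
n = rng.normal(size=d)

for sigma in sigmas:
    step = 0.1 * sigma**2    # step size shrinks with the noise level
    gamma2 = sigma**2        # annealed relaxation of the constraint y = A x + n
    for _ in range(20):
        resid = y - A @ x - n    # shared data-consistency residual
        # Joint conditional scores: prior score plus likelihood gradient.
        gx = score_x(x, sigma) + A.T @ resid / gamma2
        gn = score_n(n, sigma) + resid / gamma2
        # Langevin updates for signal and noise chains.
        x = x + step * gx + np.sqrt(2 * step) * rng.normal(size=d)
        n = n + step * gn + np.sqrt(2 * step) * rng.normal(size=d)

print("relative reconstruction error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))

For this Gaussian toy problem the stationary point of the joint updates recovers the true posterior mean (here x = 0.8 y, n = 0.2 y), which is what makes the coupled two-chain construction a useful sanity check before swapping in learned scores.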
Cite
Text
Stevens et al. "Removing Structured Noise Using Diffusion Models." Transactions on Machine Learning Research, 2025.
Markdown
[Stevens et al. "Removing Structured Noise Using Diffusion Models." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/stevens2025tmlr-removing/)
BibTeX
@article{stevens2025tmlr-removing,
title = {{Removing Structured Noise Using Diffusion Models}},
author = {Stevens, Tristan and van Gorp, Hans and Meral, Faik C. and Shin, Junseob and Yu, Jason and Robert, Jean-Luc and van Sloun, Ruud},
journal = {Transactions on Machine Learning Research},
year = {2025},
url = {https://mlanthology.org/tmlr/2025/stevens2025tmlr-removing/}
}