Learning Convex Regularizers Satisfying the Variational Source Condition for Inverse Problems
Abstract
Variational regularization has remained one of the most successful approaches to reconstruction in imaging inverse problems for several decades. With the emergence and remarkable success of deep learning in recent years, considerable research has gone into data-driven modeling of the regularizer in the variational setting. Our work extends a recently proposed method, referred to as adversarial convex regularization (ACR), which seeks to learn a data-driven convex regularizer via adversarial training, combining the power of data with classical convex regularization theory. Specifically, we leverage the variational source condition (SC) during training to enforce that the ground-truth images minimize the variational loss corresponding to the learned convex regularizer. This is achieved by adding an appropriate penalty term to the ACR training objective. The resulting regularizer (abbreviated as ACR-SC) performs on par with standard ACR but, unlike ACR, comes with a quantitative convergence-rate estimate.
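To make the training objective concrete, below is a minimal PyTorch-style sketch of a loss with the structure the abstract describes: the adversarial critic term of ACR (small regularizer values on ground-truth images, large on unregularized reconstructions, with a gradient penalty) plus a penalty that encourages the ground truth to satisfy the first-order optimality condition of the variational loss. This is not the authors' implementation; the exact form of the SC penalty and all names and weights (`acr_sc_loss`, `R_theta`, `lam`, `mu`, `gamma`) are assumptions made for illustration.

```python
# Hypothetical sketch of an ACR-SC training loss (NOT the authors' code).
# Assumes: R_theta is an input-convex network mapping (batch, n) -> (batch, 1),
# A is a linear forward operator given as an (m, n) matrix, x_true/x_noisy are
# (batch, n) ground-truth images and unregularized reconstructions, y is the
# (batch, m) measured data. Penalty form and weights are illustrative guesses.
import torch

def acr_sc_loss(R_theta, A, x_true, x_noisy, y, lam=0.1, mu=10.0, gamma=1.0):
    # Adversarial critic term: push R_theta down on ground truth,
    # up on noisy/unregularized reconstructions.
    adv = R_theta(x_true).mean() - R_theta(x_noisy).mean()

    # Gradient penalty on random interpolates, encouraging a 1-Lipschitz critic
    # (as in adversarial regularization).
    eps = torch.rand(x_true.size(0), 1, device=x_true.device)
    x_int = (eps * x_true + (1 - eps) * x_noisy).detach().requires_grad_(True)
    grad = torch.autograd.grad(R_theta(x_int).sum(), x_int, create_graph=True)[0]
    gp = ((grad.norm(dim=1) - 1).clamp(min=0) ** 2).mean()

    # Assumed SC penalty: the gradient of the variational loss
    #   V(x) = 0.5 * ||A x - y||^2 + lam * R_theta(x)
    # should vanish at the ground truth, i.e. x_true should (approximately)
    # minimize V. We penalize the norm of grad V at x_true.
    x_sc = x_true.detach().requires_grad_(True)
    V = 0.5 * ((x_sc @ A.T) - y).pow(2).sum(dim=1).mean() + lam * R_theta(x_sc).mean()
    grad_V = torch.autograd.grad(V, x_sc, create_graph=True)[0]
    sc_pen = grad_V.norm(dim=1).mean()

    return adv + mu * gp + gamma * sc_pen
```

Driving the gradient of the variational loss to zero at the ground truth is a first-order proxy for the requirement stated in the abstract, namely that ground-truth images minimize the variational loss associated with the learned convex regularizer.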
Cite
Text
Mukherjee et al. "Learning Convex Regularizers Satisfying the Variational Source Condition for Inverse Problems." NeurIPS 2021 Workshops: Deep_Inverse, 2021.Markdown
[Mukherjee et al. "Learning Convex Regularizers Satisfying the Variational Source Condition for Inverse Problems." NeurIPS 2021 Workshops: Deep_Inverse, 2021.](https://mlanthology.org/neuripsw/2021/mukherjee2021neuripsw-learning/)BibTeX
@inproceedings{mukherjee2021neuripsw-learning,
  title     = {{Learning Convex Regularizers Satisfying the Variational Source Condition for Inverse Problems}},
  author    = {Mukherjee, Subhadip and Schönlieb, Carola-Bibiane and Burger, Martin},
  booktitle = {NeurIPS 2021 Workshops: Deep_Inverse},
  year      = {2021},
  url       = {https://mlanthology.org/neuripsw/2021/mukherjee2021neuripsw-learning/}
}