Provably Convergent Data-Driven Convex-Nonconvex Regularization
Abstract
An emerging paradigm for solving inverse problems is to use deep learning to learn a regularizer from data. This leads to high-quality results, but often at the cost of provable guarantees. In this work, we show how well-posedness and convergent regularization arise within the convex-nonconvex (CNC) framework for inverse problems. We introduce a novel input weakly convex neural network (IWCNN) construction to adapt the method of learned adversarial regularization to the CNC framework. Empirically, we show that our method overcomes numerical issues of previous adversarial methods.
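The paper's IWCNN construction is not reproduced here; as a hedged illustration of the weak-convexity idea the abstract relies on, the sketch below parameterizes a ρ-weakly convex regularizer as an input-convex neural network (ICNN) minus a quadratic, R(x) = φ(x) − (ρ/2)‖x‖², which is ρ-weakly convex by definition since adding (ρ/2)‖x‖² back recovers a convex function. This is a minimal PyTorch sketch under those assumptions, not the authors' exact IWCNN architecture; the class names, layer sizes, and the choice of ρ are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    """Input-convex neural network: convex in its input x.

    Convexity holds because the hidden-to-hidden weights are clamped to be
    non-negative and the activation (softplus) is convex and non-decreasing.
    """
    def __init__(self, dim, hidden=64, layers=3):
        super().__init__()
        self.Wx = nn.ModuleList([nn.Linear(dim, hidden) for _ in range(layers)])
        self.Wz = nn.ModuleList(
            [nn.Linear(hidden, hidden, bias=False) for _ in range(layers - 1)]
        )
        self.out = nn.Linear(hidden, 1, bias=False)

    def forward(self, x):
        z = F.softplus(self.Wx[0](x))
        for wx, wz in zip(self.Wx[1:], self.Wz):
            # Clamp the z-path weights at zero so the map stays convex in x.
            z = F.softplus(wx(x) + F.linear(z, wz.weight.clamp(min=0)))
        return F.linear(z, self.out.weight.clamp(min=0))

class WeaklyConvexRegularizer(nn.Module):
    """R(x) = ICNN(x) - (rho/2)||x||^2 is rho-weakly convex by construction."""
    def __init__(self, dim, rho=1.0):
        super().__init__()
        self.icnn = ICNN(dim)
        self.rho = rho

    def forward(self, x):
        quad = 0.5 * self.rho * (x ** 2).sum(dim=-1, keepdim=True)
        return self.icnn(x) - quad

if __name__ == "__main__":
    reg = WeaklyConvexRegularizer(dim=16, rho=1.0)
    x = torch.randn(8, 16)
    print(reg(x).shape)  # torch.Size([8, 1])
```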
Cite
Text
Shumaylov et al. "Provably Convergent Data-Driven Convex-Nonconvex Regularization." NeurIPS 2023 Workshops: Deep_Inverse, 2023.
Markdown
[Shumaylov et al. "Provably Convergent Data-Driven Convex-Nonconvex Regularization." NeurIPS 2023 Workshops: Deep_Inverse, 2023.](https://mlanthology.org/neuripsw/2023/shumaylov2023neuripsw-provably/)
BibTeX
@inproceedings{shumaylov2023neuripsw-provably,
  title     = {{Provably Convergent Data-Driven Convex-Nonconvex Regularization}},
  author    = {Shumaylov, Zakhar and Budd, Jeremy and Mukherjee, Subhadip and Schönlieb, Carola-Bibiane},
  booktitle = {NeurIPS 2023 Workshops: Deep_Inverse},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/shumaylov2023neuripsw-provably/}
}