Convex Regularization Behind Neural Reconstruction
Abstract
Neural networks have shown tremendous potential for reconstructing high-resolution images in inverse problems. The non-convex and opaque nature of neural networks, however, hinders their utility in sensitive applications such as medical imaging. To cope with this challenge, this paper advocates a convex duality framework that makes a two-layer fully-convolutional ReLU denoising network amenable to convex optimization. The convex dual network not only enables globally optimal training with convex solvers, but also facilitates interpreting training and prediction. In particular, it implies that training the network with weight-decay regularization induces path sparsity, while prediction amounts to piecewise linear filtering. A range of experiments with the MNIST and fastMRI datasets confirms the efficacy of the dual network optimization problem.
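To make the setting concrete, below is a minimal sketch (not the authors' code) of the kind of two-layer fully-convolutional ReLU denoiser the abstract refers to, trained with weight decay; the layer width, kernel size, and noise level are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TwoLayerConvDenoiser(nn.Module):
    """Two-layer fully-convolutional ReLU denoising network (sketch)."""
    def __init__(self, channels=1, hidden=64, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.conv1 = nn.Conv2d(channels, hidden, kernel_size, padding=pad, bias=False)
        self.conv2 = nn.Conv2d(hidden, channels, kernel_size, padding=pad, bias=False)

    def forward(self, noisy):
        # On each fixed ReLU activation pattern the end-to-end map is a
        # fixed linear filter, which is the "piecewise linear filtering"
        # interpretation of prediction.
        return self.conv2(torch.relu(self.conv1(noisy)))

model = TwoLayerConvDenoiser()
# Weight decay on both layers is the regularizer whose convex dual the
# paper analyzes; per the abstract, it induces path sparsity.
opt = torch.optim.SGD(model.parameters(), lr=1e-2, weight_decay=1e-4)
loss_fn = nn.MSELoss()

clean = torch.randn(8, 1, 28, 28)            # stand-in for MNIST images
noisy = clean + 0.1 * torch.randn_like(clean)  # assumed noise level
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(noisy), clean)
    loss.backward()
    opt.step()
```

Note that this sketch trains the non-convex formulation directly with SGD; the paper's contribution is an equivalent convex dual program for this same architecture that can be solved to global optimality with convex solvers.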
Cite

Text

Sahiner et al. "Convex Regularization Behind Neural Reconstruction." International Conference on Learning Representations, 2021.

Markdown

[Sahiner et al. "Convex Regularization Behind Neural Reconstruction." International Conference on Learning Representations, 2021.](https://mlanthology.org/iclr/2021/sahiner2021iclr-convex/)

BibTeX
@inproceedings{sahiner2021iclr-convex,
title = {{Convex Regularization Behind Neural Reconstruction}},
author = {Sahiner, Arda and Mardani, Morteza and Ozturkler, Batu and Pilanci, Mert and Pauly, John M.},
booktitle = {International Conference on Learning Representations},
year = {2021},
url = {https://mlanthology.org/iclr/2021/sahiner2021iclr-convex/}
}