Plug-and-Play Methods Provably Converge with Properly Trained Denoisers
Abstract
Plug-and-play (PnP) is a non-convex framework that integrates modern denoising priors, such as BM3D or deep learning-based denoisers, into ADMM or other proximal algorithms. An advantage of PnP is that one can use pre-trained denoisers when there is not sufficient data for end-to-end training. Although PnP has been recently studied extensively with great empirical success, theoretical analysis addressing even the most basic question of convergence has been insufficient. In this paper, we theoretically establish convergence of PnP-FBS and PnP-ADMM, without using diminishing stepsizes, under a certain Lipschitz condition on the denoisers. We then propose real spectral normalization, a technique for training deep learning-based denoisers to satisfy the proposed Lipschitz condition. Finally, we present experimental results validating the theory.
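To make the setting concrete, below is a minimal, hypothetical sketch of the PnP-FBS iteration mentioned in the abstract, where a plugged-in denoiser takes the place of the proximal step in forward-backward splitting. The quadratic data term, step size, and shrinkage "denoiser" are illustrative placeholders only, not the paper's trained denoisers or its real spectral normalization technique.

```python
import numpy as np

def pnp_fbs(grad_f, denoise, x0, alpha=1.0, num_iters=100):
    """Sketch of PnP-FBS: x_{k+1} = D(x_k - alpha * grad_f(x_k)).

    grad_f  : gradient of the data-fidelity term f
    denoise : the plugged-in denoiser D (e.g. BM3D or a trained network);
              the paper's analysis requires a Lipschitz-type condition on
              the denoiser for convergence without diminishing stepsizes.
    """
    x = x0
    for _ in range(num_iters):
        x = denoise(x - alpha * grad_f(x))
    return x

# Toy usage with placeholder components (not from the paper):
# least-squares data term f(x) = 0.5 * ||A x - b||^2 and soft-thresholding
# standing in for a denoiser.
A = np.random.randn(20, 50)
b = np.random.randn(20)
grad_f = lambda x: A.T @ (A @ x - b)
denoise = lambda x: np.sign(x) * np.maximum(np.abs(x) - 0.05, 0.0)

x_hat = pnp_fbs(grad_f, denoise, x0=np.zeros(50),
                alpha=1.0 / np.linalg.norm(A, 2) ** 2)
```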
Cite
Text
Ryu et al. "Plug-and-Play Methods Provably Converge with Properly Trained Denoisers." International Conference on Machine Learning, 2019.

Markdown
[Ryu et al. "Plug-and-Play Methods Provably Converge with Properly Trained Denoisers." International Conference on Machine Learning, 2019.](https://mlanthology.org/icml/2019/ryu2019icml-plugandplay/)

BibTeX
@inproceedings{ryu2019icml-plugandplay,
title = {{Plug-and-Play Methods Provably Converge with Properly Trained Denoisers}},
author = {Ryu, Ernest and Liu, Jialin and Wang, Sicheng and Chen, Xiaohan and Wang, Zhangyang and Yin, Wotao},
booktitle = {International Conference on Machine Learning},
year = {2019},
pages = {5546--5557},
volume = {97},
url = {https://mlanthology.org/icml/2019/ryu2019icml-plugandplay/}
}