Deep Random Projector: Accelerated Deep Image Prior

Abstract

Deep image prior (DIP) has shown great promise in tackling a variety of image restoration (IR) and general visual inverse problems, needing no training data. However, the resulting optimization process is often very slow, inevitably hindering DIP's practical usage for time-sensitive scenarios. In this paper, we focus on IR, and propose two crucial modifications to DIP that help achieve substantial speedup: 1) optimizing the DIP seed while freezing randomly-initialized network weights, and 2) reducing the network depth. In addition, we reintroduce explicit priors, such as the sparse gradient prior encoded by total-variation regularization, to preserve DIP's peak performance. We evaluate the proposed method on three IR tasks, including image denoising, image super-resolution, and image inpainting, against the original DIP and its variants, as well as the competing metaDIP, which uses meta-learning to learn good initializers from extra data. Our method is a clear winner in obtaining competitive restoration quality in a minimal amount of time. Our code is available at https://github.com/sun-umn/Deep-Random-Projector.
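The abstract's first modification, optimizing only the input seed while the randomly-initialized network stays frozen, can be illustrated with a toy sketch. The sketch below is not the paper's implementation: it replaces the untrained CNN with a single frozen random linear layer `W`, works on a 1D signal, and adds an anisotropic total-variation term on the output to stand in for the sparse-gradient prior the abstract mentions. All names, shapes, and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 32                                 # signal length, seed length
W = rng.normal(size=(n, m)) / np.sqrt(m)      # frozen random "network" weights

clean = np.sin(np.linspace(0, 3 * np.pi, n))  # toy ground-truth signal
y = clean + 0.1 * rng.normal(size=n)          # noisy observation


def tv_grad(x):
    """Subgradient of the 1D anisotropic TV term sum_i |x[i+1] - x[i]|."""
    d = np.sign(np.diff(x))
    g = np.zeros_like(x)
    g[:-1] -= d        # d/dx_i   of |x_{i+1} - x_i| is -sign(...)
    g[1:] += d         # d/dx_{i+1} of |x_{i+1} - x_i| is +sign(...)
    return g


def loss(z, lam=0.05):
    x = W @ z
    return np.sum((x - y) ** 2) + lam * np.sum(np.abs(np.diff(x)))


# The seed z is the ONLY variable being optimized; W never changes.
z = rng.normal(size=m)
lam, lr = 0.05, 0.01
loss_init = loss(z)

for _ in range(500):
    x = W @ z
    # Chain rule through the frozen layer: grad wrt z = W^T * (grad wrt x).
    grad = W.T @ (2 * (x - y) + lam * tv_grad(x))
    z -= lr * grad

loss_final = loss(z)
```

In the real method the frozen module is a shallow untrained CNN rather than a linear map, but the structure of the update is the same: gradients flow through fixed random weights back to the seed, so only a low-dimensional input is fitted to the corrupted observation.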

Cite

Text

Li et al. "Deep Random Projector: Accelerated Deep Image Prior." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.01743

BibTeX

@inproceedings{li2023cvpr-deep,
  title     = {{Deep Random Projector: Accelerated Deep Image Prior}},
  author    = {Li, Taihui and Wang, Hengkang and Zhuang, Zhong and Sun, Ju},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2023},
  pages     = {18176--18185},
  doi       = {10.1109/CVPR52729.2023.01743},
  url       = {https://mlanthology.org/cvpr/2023/li2023cvpr-deep/}
}