NAS-DIP: Learning Deep Image Prior with Neural Architecture Search

Abstract

Recent work has shown that the structure of deep convolutional neural networks can be used as a structured image prior for solving various inverse image restoration tasks. Instead of using hand-designed architectures, we propose to search for neural architectures that capture stronger image priors. Building upon a generic U-Net architecture, our core contribution lies in designing new search spaces for (1) an upsampling cell and (2) a pattern of cross-scale residual connections. We search for an improved network by leveraging an existing neural architecture search algorithm (using reinforcement learning with a recurrent neural network controller). We validate the effectiveness of our method via a wide variety of applications, including image restoration, dehazing, image-to-image translation, and matrix factorization. Extensive experimental results show that our algorithm performs favorably against state-of-the-art learning-free approaches and reaches competitive performance with existing learning-based methods in some cases.
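To make the abstract's notion of an upsampling-cell search space concrete, here is a minimal sketch. The operation lists below are illustrative assumptions, not the paper's exact configuration: a cell is modeled as a (spatial resize, feature transform, activation) triple, and a controller (an RNN trained with reinforcement learning in the paper, a plain random sampler here) picks one combination per cell.

```python
import itertools
import random

# Hypothetical search space for an upsampling cell: each candidate cell is a
# (spatial resize, feature transform, activation) triple. These option lists
# are assumptions for illustration, not the paper's exact choices.
RESIZE_OPS = ["bilinear", "nearest", "transposed_conv", "pixel_shuffle"]
TRANSFORM_OPS = ["conv3x3", "conv5x5", "sep_conv3x3", "identity"]
ACTIVATIONS = ["relu", "leaky_relu", "none"]

def enumerate_cells():
    """Enumerate every candidate upsampling cell in this toy space."""
    return list(itertools.product(RESIZE_OPS, TRANSFORM_OPS, ACTIVATIONS))

def sample_cell(rng):
    """Mimic a controller sampling one cell (random here; an RNN controller
    trained with reinforcement learning plays this role in the paper)."""
    return (rng.choice(RESIZE_OPS),
            rng.choice(TRANSFORM_OPS),
            rng.choice(ACTIVATIONS))

if __name__ == "__main__":
    cells = enumerate_cells()
    print(len(cells))  # 4 resize ops x 4 transforms x 3 activations = 48
    print(sample_cell(random.Random(0)))
```

Even this toy space shows why search beats hand design: enumerating all combinations quickly becomes infeasible once cross-scale residual-connection patterns are added, which is what motivates the learned controller.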

Cite

Text

Chen et al. "NAS-DIP: Learning Deep Image Prior with Neural Architecture Search." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58523-5_26

Markdown

[Chen et al. "NAS-DIP: Learning Deep Image Prior with Neural Architecture Search." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/chen2020eccv-nasdip/) doi:10.1007/978-3-030-58523-5_26

BibTeX

@inproceedings{chen2020eccv-nasdip,
  title     = {{NAS-DIP: Learning Deep Image Prior with Neural Architecture Search}},
  author    = {Chen, Yun-Chun and Gao, Chen and Robb, Esther and Huang, Jia-Bin},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2020},
  doi       = {10.1007/978-3-030-58523-5_26},
  url       = {https://mlanthology.org/eccv/2020/chen2020eccv-nasdip/}
}