Deep Laplacian Pyramid Networks for Fast and Accurate Super-Resolution

Abstract

Convolutional neural networks have recently demonstrated high-quality reconstruction for single-image super-resolution. In this paper, we propose the Laplacian Pyramid Super-Resolution Network (LapSRN) to progressively reconstruct the sub-band residuals of high-resolution images. At each pyramid level, our model takes coarse-resolution feature maps as input, predicts the high-frequency residuals, and uses transposed convolutions for upsampling to the finer level. Our method does not require bicubic interpolation as a pre-processing step and thus dramatically reduces the computational complexity. We train the proposed LapSRN with deep supervision using a robust Charbonnier loss function and achieve high-quality reconstruction. Furthermore, our network generates multi-scale predictions in a single feed-forward pass through progressive reconstruction, thereby facilitating resource-aware applications. Extensive quantitative and qualitative evaluations on benchmark datasets show that the proposed algorithm performs favorably against state-of-the-art methods in terms of speed and accuracy.
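
To make the abstract's ideas concrete, the sketch below (not the authors' released code) illustrates one LapSRN-style pyramid level and the robust Charbonnier loss in PyTorch. The layer widths, depth, kernel sizes, and module/variable names are assumptions chosen for illustration only.

```python
# Minimal sketch (not the official implementation): one 2x pyramid level that
# predicts a high-frequency residual and upsamples with transposed
# convolutions, plus the robust Charbonnier loss. All hyperparameters and
# names here are illustrative assumptions.
import torch
import torch.nn as nn


def charbonnier_loss(pred, target, eps=1e-3):
    """Robust Charbonnier loss: mean of sqrt((pred - target)^2 + eps^2)."""
    return torch.sqrt((pred - target) ** 2 + eps ** 2).mean()


class PyramidLevel(nn.Module):
    """One 2x level: refine features, upsample them with a transposed
    convolution, and add the predicted residual to an upsampled image."""

    def __init__(self, channels=64, depth=5):
        super().__init__()
        layers = []
        for _ in range(depth):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.LeakyReLU(0.2, inplace=True)]
        self.features = nn.Sequential(*layers)
        # Transposed convolutions upsample features and image by a factor of 2.
        self.up_feat = nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1)
        self.up_img = nn.ConvTranspose2d(1, 1, 4, stride=2, padding=1)
        self.to_residual = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, feat, img):
        feat = self.up_feat(self.features(feat))
        residual = self.to_residual(feat)   # predicted sub-band residual
        img = self.up_img(img) + residual   # coarse upsampling + residual
        return feat, img


if __name__ == "__main__":
    level = PyramidLevel()
    feat = torch.randn(1, 64, 32, 32)   # coarse-resolution feature maps
    img = torch.randn(1, 1, 32, 32)     # coarse-resolution (luminance) image
    feat2, sr = level(feat, img)        # 2x prediction from this level
    target = torch.randn(1, 1, 64, 64)
    print(sr.shape, charbonnier_loss(sr, target).item())
```

Stacking such levels yields the multi-scale predictions mentioned in the abstract, with a Charbonnier loss applied at each level for deep supervision.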

Cite

Text

Lai et al. "Deep Laplacian Pyramid Networks for Fast and Accurate Super-Resolution." Conference on Computer Vision and Pattern Recognition, 2017. doi:10.1109/CVPR.2017.618

Markdown

[Lai et al. "Deep Laplacian Pyramid Networks for Fast and Accurate Super-Resolution." Conference on Computer Vision and Pattern Recognition, 2017.](https://mlanthology.org/cvpr/2017/lai2017cvpr-deep/) doi:10.1109/CVPR.2017.618

BibTeX

@inproceedings{lai2017cvpr-deep,
  title     = {{Deep Laplacian Pyramid Networks for Fast and Accurate Super-Resolution}},
  author    = {Lai, Wei-Sheng and Huang, Jia-Bin and Ahuja, Narendra and Yang, Ming-Hsuan},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2017},
  doi       = {10.1109/CVPR.2017.618},
  url       = {https://mlanthology.org/cvpr/2017/lai2017cvpr-deep/}
}