Balanced Two-Stage Residual Networks for Image Super-Resolution

Abstract

In this paper, balanced two-stage residual networks (BTSRN) are proposed for single-image super-resolution. The deep residual design with constrained depth strikes an effective balance between accuracy and speed for super-resolving images. Experiments show that the balanced two-stage structure, together with our lightweight two-layer PConv residual block design, achieves very promising results in both accuracy and speed. We evaluated our models in the New Trends in Image Restoration and Enhancement workshop and challenge on image super-resolution (NTIRE SR 2017). Our final model, with only 10 residual blocks, ranked among the best entries in both accuracy (6th among 20 final teams) and speed (2nd among the top 6 teams in terms of accuracy). The source code for both training and evaluation is available at https://github.com/ychfan/sr_ntire2017.
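
To make the two-stage idea concrete, below is a minimal PyTorch sketch of such a network: a stack of lightweight two-layer residual blocks operating at the input (low) resolution, an upsampling step, and a smaller stack of blocks at the output (high) resolution. The block internals (PReLU/conv ordering), channel width, the 6/4 block split, the sub-pixel upsampling layer, and the global bicubic skip are illustrative assumptions, not the exact BTSRN configuration described in the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoLayerResBlock(nn.Module):
    # Assumed two-layer residual block: PReLU -> 3x3 conv, applied twice, no batch norm.
    def __init__(self, channels=64):
        super().__init__()
        self.act1 = nn.PReLU(channels)
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.act2 = nn.PReLU(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        out = self.conv1(self.act1(x))
        out = self.conv2(self.act2(out))
        return x + out  # local residual (skip) connection


class TwoStageSRNet(nn.Module):
    # Low-resolution stage -> upsampling -> high-resolution stage.
    # Most blocks sit in the cheap low-resolution stage; a few refine at high resolution.
    def __init__(self, scale=4, channels=64, lr_blocks=6, hr_blocks=4):
        super().__init__()
        self.scale = scale
        self.head = nn.Conv2d(3, channels, kernel_size=3, padding=1)
        self.lr_stage = nn.Sequential(*(TwoLayerResBlock(channels) for _ in range(lr_blocks)))
        self.upsample = nn.Sequential(  # sub-pixel upsampling; one possible choice
            nn.Conv2d(channels, channels * scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),
        )
        self.hr_stage = nn.Sequential(*(TwoLayerResBlock(channels) for _ in range(hr_blocks)))
        self.tail = nn.Conv2d(channels, 3, kernel_size=3, padding=1)

    def forward(self, x):
        feat = self.lr_stage(self.head(x))         # features computed at low resolution
        feat = self.hr_stage(self.upsample(feat))  # refined at high resolution
        residual = self.tail(feat)
        # Global skip over the bicubically upsampled input (a common SR design choice).
        base = F.interpolate(x, scale_factor=self.scale, mode="bicubic", align_corners=False)
        return base + residual


# Usage: super-resolve a 3-channel low-resolution image by 4x.
model = TwoStageSRNet(scale=4)
lr = torch.randn(1, 3, 48, 48)
sr = model(lr)  # -> (1, 3, 192, 192)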

Cite

Text

Fan et al. "Balanced Two-Stage Residual Networks for Image Super-Resolution." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2017. doi:10.1109/CVPRW.2017.154

Markdown

[Fan et al. "Balanced Two-Stage Residual Networks for Image Super-Resolution." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2017.](https://mlanthology.org/cvprw/2017/fan2017cvprw-balanced/) doi:10.1109/CVPRW.2017.154

BibTeX

@inproceedings{fan2017cvprw-balanced,
  title     = {{Balanced Two-Stage Residual Networks for Image Super-Resolution}},
  author    = {Fan, Yuchen and Shi, Honghui and Yu, Jiahui and Liu, Ding and Han, Wei and Yu, Haichao and Wang, Zhangyang and Wang, Xinchao and Huang, Thomas S.},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2017},
  pages     = {1157--1164},
  doi       = {10.1109/CVPRW.2017.154},
  url       = {https://mlanthology.org/cvprw/2017/fan2017cvprw-balanced/}
}