UDNet: Up-Down Network for Compact and Efficient Feature Representation in Image Super-Resolution

Abstract

Recently, image super-resolution (SR) using convolutional neural networks (CNNs) has achieved remarkable performance. However, there is a tradeoff between the performance and speed of SR, depending on whether feature representation and learning are conducted in high-resolution (HR) or low-resolution (LR) space. Generally, to pursue real-time SR, the number of parameters in a CNN has to be restricted, which results in performance degradation. In this paper, we propose a compact and efficient feature representation for real-time SR, named the up-down network (UDNet). Specifically, we introduce a novel hourglass-shaped structure that combines transposed convolution and spatial aggregation. This structure enables the network to transfer feature representations between the LR and HR spaces multiple times to learn a better mapping. Comprehensive experiments demonstrate that, compared with existing CNN models, UDNet achieves real-time SR without performance degradation on widely used benchmarks.
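To make the "up-down" idea concrete, here is a minimal single-channel NumPy sketch of one LR→HR→LR round trip: a transposed convolution scatters the kernel at each input position to upsample (LR to HR), and a strided convolution aggregates spatially to come back down (HR to LR). This is an illustration of the general mechanism only, not the authors' architecture; the kernel values, sizes, and strides below are placeholder assumptions (in the actual network they would be learned and stacked across channels).

```python
import numpy as np

def conv2d(x, k, stride=1):
    """Plain valid 2-D convolution on a single channel; stride > 1
    acts as spatial aggregation (HR -> LR)."""
    kh, kw = k.shape
    oh = (x.shape[0] - kh) // stride + 1
    ow = (x.shape[1] - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i*stride:i*stride+kh, j*stride:j*stride+kw] * k)
    return out

def transposed_conv2d(x, k, stride=2):
    """Transposed convolution as a scatter-add of the kernel at each
    input position (LR -> HR upsampling)."""
    kh, kw = k.shape
    h, w = x.shape
    out = np.zeros(((h - 1) * stride + kh, (w - 1) * stride + kw))
    for i in range(h):
        for j in range(w):
            out[i*stride:i*stride+kh, j*stride:j*stride+kw] += x[i, j] * k
    return out

# One "up-down" round trip: LR features -> HR space -> back to LR.
lr_feat = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 LR feature map
k_up   = np.ones((2, 2))         # placeholder upsampling kernel
k_down = np.full((2, 2), 0.25)   # placeholder aggregation kernel

hr_feat  = transposed_conv2d(lr_feat, k_up, stride=2)  # 4x4 -> 8x8
lr_again = conv2d(hr_feat, k_down, stride=2)           # 8x8 -> 4x4

print(hr_feat.shape, lr_again.shape)  # (8, 8) (4, 4)
```

With these particular kernels the down step exactly averages each 2x2 block produced by the up step, so the round trip returns the original map; with learned kernels, repeating such up-down transitions lets the network refine features in both spaces.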

Cite

Text

Chen et al. "UDNet: Up-Down Network for Compact and Efficient Feature Representation in Image Super-Resolution." IEEE/CVF International Conference on Computer Vision Workshops, 2017. doi:10.1109/ICCVW.2017.130

Markdown

[Chen et al. "UDNet: Up-Down Network for Compact and Efficient Feature Representation in Image Super-Resolution." IEEE/CVF International Conference on Computer Vision Workshops, 2017.](https://mlanthology.org/iccvw/2017/chen2017iccvw-udnet/) doi:10.1109/ICCVW.2017.130

BibTeX

@inproceedings{chen2017iccvw-udnet,
  title     = {{UDNet: Up-Down Network for Compact and Efficient Feature Representation in Image Super-Resolution}},
  author    = {Chen, Chang and Tian, Xinmei and Wu, Feng and Xiong, Zhiwei},
  booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
  year      = {2017},
  pages     = {1069--1076},
  doi       = {10.1109/ICCVW.2017.130},
  url       = {https://mlanthology.org/iccvw/2017/chen2017iccvw-udnet/}
}