NoUCSR: Efficient Super-Resolution Network Without Upsampling Convolution
Abstract
Deep learning approaches have been ubiquitous in single image super-resolution ever since the success of SRCNN. However, this superior performance comes at the cost of high computational resource requirements, limiting the application of deep learning approaches on resource-constrained embedded and mobile devices. In this paper, we first show that the convolution layers in the upsampling block are parameter- and computation-intensive. Second, we find that replacing the upsampling convolution with a concatenation of features from different levels reduces parameters and inference runtime significantly while maintaining the same performance. Finally, we introduce an efficient model without upsampling convolution, called NoUCSR, and present variants optimized for parameter count, inference runtime, and performance, respectively, under the same constraints as MSRResNet. The experiments show that NoUCSR achieves a better tradeoff among parameter count, inference runtime, and performance than state-of-the-art methods.
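The core idea can be illustrated with a minimal NumPy sketch. A conventional upsampling block uses a convolution to expand the channel count to C·r² before sub-pixel (pixel shuffle) rearrangement; concatenating intermediate features from several stages can supply those channels with zero extra parameters. The shapes and the four-stage split below are hypothetical, chosen only to make the arithmetic concrete:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Sub-pixel rearrangement: (C*r*r, H, W) -> (C, H*r, W*r)."""
    crr, h, w = x.shape
    c = crr // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)  # (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)

# Hypothetical intermediate features from four network stages,
# each with 16 channels on a 24x24 feature map.
feats = [np.random.randn(16, 24, 24) for _ in range(4)]

# Conventional block: a 3x3 conv expands 16 -> 16*2*2 = 64 channels before
# pixel shuffle, costing 16 * 64 * 3 * 3 = 9216 weights on its own.

# NoUCSR-style alternative: concatenate the four 16-channel features to
# obtain the 64 channels pixel shuffle needs, with no extra parameters.
stacked = np.concatenate(feats, axis=0)  # (64, 24, 24)
sr = pixel_shuffle(stacked, r=2)         # (16, 48, 48): 2x spatial upscale
print(sr.shape)
```

The concatenation itself is free at inference time; the saving comes entirely from deleting the channel-expanding convolution ahead of the pixel shuffle.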
Cite
Text
Xiong et al. "NoUCSR: Efficient Super-Resolution Network Without Upsampling Convolution." IEEE/CVF International Conference on Computer Vision Workshops, 2019. doi:10.1109/ICCVW.2019.00420
Markdown
[Xiong et al. "NoUCSR: Efficient Super-Resolution Network Without Upsampling Convolution." IEEE/CVF International Conference on Computer Vision Workshops, 2019.](https://mlanthology.org/iccvw/2019/xiong2019iccvw-noucsr/) doi:10.1109/ICCVW.2019.00420
BibTeX
@inproceedings{xiong2019iccvw-noucsr,
title = {{NoUCSR: Efficient Super-Resolution Network Without Upsampling Convolution}},
author = {Xiong, Dongliang and Huang, Kai and Chen, Siang and Li, Bowen and Jiang, Haitian and Xu, Wenyuan},
booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
year = {2019},
pages = {3378-3387},
doi = {10.1109/ICCVW.2019.00420},
url = {https://mlanthology.org/iccvw/2019/xiong2019iccvw-noucsr/}
}