A Deep Convolutional Neural Network with Selection Units for Super-Resolution

Abstract

Rectified linear units (ReLU) are known to be effective in many deep learning methods. Inspired by the linear-mapping technique used in other super-resolution (SR) methods, we reinterpret ReLU as the point-wise multiplication of an identity mapping and a switch, and present a novel nonlinear unit, called a selection unit (SU). While a conventional ReLU has no direct control over which data is passed, the proposed SU optimizes this on-off switching control, and is therefore capable of handling nonlinearity more flexibly than ReLU. Our proposed deep network with SUs, called SelNet, ranked 5th in the NTIRE2017 Challenge while having much lower computational complexity than the top-4 entries. Further experimental results show that our proposed SelNet outperforms both our ReLU-only baseline (without SUs) and other state-of-the-art deep-learning-based SR methods.
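The reinterpretation above — ReLU as an identity mapping multiplied point-wise by an on-off switch — can be sketched in a few lines. This is a minimal illustration, not the paper's exact architecture: here the switch is assumed to be a sigmoid applied to a ReLU activation of the same input, whereas in the full network the switch is produced by additional learnable layers.

```python
import numpy as np

def relu(x):
    """Standard rectified linear unit: identity gated by a hard 0/1 switch."""
    return np.maximum(x, 0.0)

def sigmoid(x):
    """Soft switch in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-x))

def selection_unit(x):
    """Sketch of a selection unit: identity mapping times a learnable-style
    soft switch. The switch here is sigmoid(relu(x)) for illustration only;
    the paper derives it from extra convolutional layers (omitted)."""
    return x * sigmoid(relu(x))

# ReLU blocks negatives entirely; the SU's soft switch instead attenuates
# them, so some signal can still pass through.
x = np.array([-2.0, 0.0, 3.0])
print(relu(x))            # hard gating: negatives zeroed
print(selection_unit(x))  # soft gating: negatives scaled, positives mostly kept
```

The key difference from ReLU is that the gating signal is continuous and trainable, so the network can learn *how much* of each activation to pass rather than making a fixed zero-or-identity decision.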

Cite

Text

Choi and Kim. "A Deep Convolutional Neural Network with Selection Units for Super-Resolution." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2017. doi:10.1109/CVPRW.2017.153

Markdown

[Choi and Kim. "A Deep Convolutional Neural Network with Selection Units for Super-Resolution." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2017.](https://mlanthology.org/cvprw/2017/choi2017cvprw-deep/) doi:10.1109/CVPRW.2017.153

BibTeX

@inproceedings{choi2017cvprw-deep,
  title     = {{A Deep Convolutional Neural Network with Selection Units for Super-Resolution}},
  author    = {Choi, Jae-Seok and Kim, Munchurl},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2017},
  pages     = {1150--1156},
  doi       = {10.1109/CVPRW.2017.153},
  url       = {https://mlanthology.org/cvprw/2017/choi2017cvprw-deep/}
}