On the Sparsity of Image Super-Resolution Network
Abstract
The over-parameterization of neural networks has received wide attention for a long time. It offers the opportunity to find, within an over-parameterized network, sub-networks that improve parameter efficiency. In this study, we use EDSR as the backbone network to explore parameter efficiency in super-resolution (SR) networks through the lens of sparsity. Specifically, we search for sparse sub-networks at two granularities, individual weights and convolution kernels, using various methods, and analyze the relationship between the structure and the performance of these sub-networks. (1) At weight granularity, we observe the ``Lottery Ticket Hypothesis'' from a new perspective in the regression task of SR. (2) At convolution kernel granularity, we apply several methods to explore how different sparse sub-networks affect network performance, and find that, under certain rules, the performance of different sub-networks hardly depends on their structures. (3) We propose a very convenient width-sparsity method at convolution kernel granularity, which can improve the parameter efficiency of most SR networks.
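As a concrete illustration of the two granularities discussed above, the following is a minimal PyTorch sketch of magnitude-based pruning masks for a single convolution layer. The function names, the L1 ranking rule for kernels, and the 90% sparsity level are illustrative assumptions, not the paper's exact procedure; the paper searches sub-networks with several methods, and this sketch shows only one simple magnitude criterion.

import torch
import torch.nn as nn

def weight_mask(conv: nn.Conv2d, sparsity: float) -> torch.Tensor:
    """Weight granularity: zero out the smallest-magnitude individual weights."""
    w = conv.weight.detach().abs()
    k = int(w.numel() * sparsity)  # number of weights to prune
    # kthvalue gives the k-th smallest |w|; if k == 0, pick a threshold below all weights
    threshold = w.flatten().kthvalue(k).values if k > 0 else w.min() - 1
    return (w > threshold).float()  # 1 = keep, 0 = prune

def kernel_mask(conv: nn.Conv2d, sparsity: float) -> torch.Tensor:
    """Kernel granularity: remove whole 2-D kernels, ranked by their L1 norm."""
    w = conv.weight.detach()                 # shape (out_c, in_c, kh, kw)
    norms = w.abs().sum(dim=(2, 3))          # L1 norm of each (out, in) kernel
    k = int(norms.numel() * sparsity)        # number of kernels to prune
    threshold = norms.flatten().kthvalue(k).values if k > 0 else norms.min() - 1
    keep = (norms > threshold).float()       # shape (out_c, in_c)
    return keep[:, :, None, None].expand_as(w)

conv = nn.Conv2d(64, 64, 3, padding=1)       # a typical EDSR body convolution
m_w = weight_mask(conv, 0.9)                 # prune ~90% of individual weights
m_k = kernel_mask(conv, 0.9)                 # prune ~90% of whole kernels
conv.weight.data.mul_(m_w)                   # apply the weight-granularity mask
print(f"weight sparsity: {1 - m_w.mean():.2f}, kernel sparsity: {1 - m_k.mean():.2f}")

The distinction matters for efficiency: at weight granularity each scalar is kept or pruned independently, while at kernel granularity an entire kernel is removed at once, which is what makes structured, width-style sparsity of the kind proposed in (3) possible in practice.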
Cite
Dong et al. "On the Sparsity of Image Super-Resolution Network." NeurIPS 2022 Workshops: ICBINB, 2022.

BibTeX
@inproceedings{dong2022neuripsw-sparsity,
  title     = {{On the Sparsity of Image Super-Resolution Network}},
  author    = {Dong, Chenyu and Ma, Hailong and Gu, Jinjin and Zhang, Ruofan and Li, Jieming and Yuan, Chun},
  booktitle = {NeurIPS 2022 Workshops: ICBINB},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/dong2022neuripsw-sparsity/}
}