Learning Lightweight Neural Networks via Channel-Split Recurrent Convolution
Abstract
Lightweight neural networks refer to deep networks with small numbers of parameters, which can be deployed on resource-limited hardware such as embedded systems. To learn such lightweight networks effectively and efficiently, in this paper we propose a novel convolutional layer, namely Channel-Split Recurrent Convolution (CSR-Conv), where we split the output channels to generate data sequences of length T as the input to recurrent layers with shared weights. As a consequence, we can construct lightweight convolutional networks by simply replacing (some) linear convolutional layers with CSR-Conv layers. We prove that under mild conditions the model size decreases at a rate of O(1 / T^2). Empirically we demonstrate state-of-the-art performance using VGG-16, ResNet-50, ResNet-56, ResNet-110, DenseNet-40, MobileNet, and EfficientNet as backbone networks on CIFAR-10 and ImageNet. Code can be found at https://github.com/tuaxon/CSR_Conv.
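To make the channel-split recurrence concrete, below is a minimal PyTorch sketch of how a CSR-Conv-style layer could be written, assuming the shared recurrent unit is a convolution that maps one input slice of C_in / T channels to one output slice of C_out / T channels per step. The class name `CSRConvSketch`, the 1x1 hidden-state convolution, and all hyperparameters are illustrative assumptions, not the authors' released implementation (see the GitHub link above for that).

```python
import torch
import torch.nn as nn


class CSRConvSketch(nn.Module):
    """Illustrative channel-split recurrent convolution (a sketch, not the official code).

    The input channels are split into T slices that are fed one at a time to a
    shared convolution with a small recurrent state; the T step outputs are
    concatenated to rebuild the full set of output channels. Because the shared
    convolution maps C_in / T channels to C_out / T channels, its weight tensor
    is roughly 1 / T^2 the size of the standard convolution it replaces.
    """

    def __init__(self, in_channels, out_channels, kernel_size,
                 T=4, stride=1, padding=0):
        super().__init__()
        assert in_channels % T == 0 and out_channels % T == 0
        self.T = T
        self.in_split = in_channels // T
        self.out_split = out_channels // T
        # Shared input-to-hidden convolution, reused at every recurrent step.
        self.conv_in = nn.Conv2d(self.in_split, self.out_split, kernel_size,
                                 stride=stride, padding=padding, bias=False)
        # Shared hidden-to-hidden 1x1 convolution carrying the recurrent state.
        self.conv_h = nn.Conv2d(self.out_split, self.out_split, 1, bias=False)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        chunks = torch.chunk(x, self.T, dim=1)  # T slices of the input channels
        h, outs = None, []
        for c in chunks:
            z = self.conv_in(c)
            if h is not None:
                z = z + self.conv_h(h)  # recurrent state shared across steps
            h = self.act(z)
            outs.append(h)
        return torch.cat(outs, dim=1)  # reassemble the full C_out channels
```

Because the T output slices are concatenated back to the original number of output channels, a layer like this can in principle be dropped in wherever a standard convolution sits; for example, `CSRConvSketch(256, 256, 3, T=4, padding=1)` applied to a `(1, 256, 32, 32)` input returns a `(1, 256, 32, 32)` output while reusing one small set of weights T times.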
Cite
Text
Wu et al. "Learning Lightweight Neural Networks via Channel-Split Recurrent Convolution." Winter Conference on Applications of Computer Vision, 2023.
Markdown
[Wu et al. "Learning Lightweight Neural Networks via Channel-Split Recurrent Convolution." Winter Conference on Applications of Computer Vision, 2023.](https://mlanthology.org/wacv/2023/wu2023wacv-learning/)
BibTeX
@inproceedings{wu2023wacv-learning,
title = {{Learning Lightweight Neural Networks via Channel-Split Recurrent Convolution}},
author = {Wu, Guojun and Zhang, Xin and Zhang, Ziming and Li, Yanhua and Zhou, Xun and Brinton, Christopher and Liu, Zhenming},
booktitle = {Winter Conference on Applications of Computer Vision},
year = {2023},
pages = {3858--3868},
url = {https://mlanthology.org/wacv/2023/wu2023wacv-learning/}
}