ChannelNets: Compact and Efficient Convolutional Neural Networks via Channel-Wise Convolutions
Abstract
Convolutional neural networks (CNNs) have shown great capability in solving various artificial intelligence tasks. However, their increasing model size has raised challenges in employing them in resource-limited applications. In this work, we propose to compress deep models by using channel-wise convolutions, which replace dense connections among feature maps with sparse ones in CNNs. Based on this novel operation, we build light-weight CNNs known as ChannelNets. ChannelNets use three instances of channel-wise convolutions, namely group channel-wise convolutions, depth-wise separable channel-wise convolutions, and the convolutional classification layer. Compared to prior CNNs designed for mobile devices, ChannelNets achieve a significant reduction in the number of parameters and computational cost without loss in accuracy. Notably, our work represents the first attempt to compress the fully-connected classification layer, which usually accounts for about 25% of total parameters in compact CNNs. Experimental results on the ImageNet dataset demonstrate that ChannelNets achieve consistently better performance compared to prior methods.
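To illustrate the core operation the abstract describes, the sketch below implements a channel-wise convolution in plain NumPy: a small 1D kernel slides along the channel dimension, so each output channel connects to only a few input channels (sparse) rather than to all of them, as a dense 1x1 convolution would. This is a minimal illustration under my own assumptions; the function name `channel_wise_conv`, the stride handling, and the single shared kernel are hypothetical simplifications, not the authors' implementation.

```python
import numpy as np

def channel_wise_conv(x, kernel, stride=1):
    """Channel-wise convolution (illustrative sketch).

    Slides a 1D kernel along the channel dimension, sharing the same
    weights across all spatial positions.

    x:      input feature maps of shape (C, H, W)
    kernel: 1D weights of shape (k,)
    Returns an array of shape (C_out, H, W) with
    C_out = (C - k) // stride + 1.
    """
    C, H, W = x.shape
    k = kernel.shape[0]
    c_out = (C - k) // stride + 1
    out = np.zeros((c_out, H, W))
    for i in range(c_out):
        # Each output channel mixes only k input channels (sparse),
        # instead of all C as in a dense 1x1 convolution. Parameter
        # count drops from C * C_out (dense) to k (shared kernel).
        window = x[i * stride : i * stride + k]        # (k, H, W)
        out[i] = np.tensordot(kernel, window, axes=1)  # (H, W)
    return out
```

For example, mixing C = 512 channels with a dense 1x1 convolution costs 512 x 512 weights, while a channel-wise convolution with k = 8 costs only 8, which is the source of the parameter savings the abstract reports.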
Cite
Text

Gao et al. "ChannelNets: Compact and Efficient Convolutional Neural Networks via Channel-Wise Convolutions." Neural Information Processing Systems, 2018.

Markdown

[Gao et al. "ChannelNets: Compact and Efficient Convolutional Neural Networks via Channel-Wise Convolutions." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/gao2018neurips-channelnets/)

BibTeX
@inproceedings{gao2018neurips-channelnets,
title = {{ChannelNets: Compact and Efficient Convolutional Neural Networks via Channel-Wise Convolutions}},
author = {Gao, Hongyang and Wang, Zhengyang and Ji, Shuiwang},
booktitle = {Neural Information Processing Systems},
year = {2018},
pages = {5197--5205},
url = {https://mlanthology.org/neurips/2018/gao2018neurips-channelnets/}
}