Less Is More: Towards Compact CNNs
Abstract
To attain favorable performance on large-scale datasets, convolutional neural networks (CNNs) are usually designed with very high capacity, involving millions of parameters. In this work, we aim at optimizing the number of neurons in a network, and thus the number of parameters. We show that, by incorporating sparse constraints into the objective function, it is possible to decimate the number of neurons during the training stage. As a result, the number of parameters and the memory footprint of the network are reduced, which is also desirable at test time. We evaluate our method on several well-known CNN structures, including AlexNet and VGG, over different datasets, including ImageNet. Extensive experimental results demonstrate that our method leads to compact networks. Taking the first fully connected layer as an example, our compact CNN retains only $30\,\%$ of the original neurons without any degradation of the top-1 classification accuracy.
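The sparse constraints described above can be illustrated with a group-sparsity regularizer over each neuron's incoming weights: penalizing the L2 norm of every neuron's weight vector drives entire columns to zero, and zeroed neurons can then be removed. This is a minimal sketch of that idea, not the paper's exact formulation (the regularization weight `lam` and the layer shape below are illustrative assumptions):

```python
import numpy as np

def group_sparsity_penalty(W, lam=1e-3):
    """Group-sparsity (group-lasso style) penalty on a fully connected layer.

    W   : (n_in, n_out) weight matrix; column j holds neuron j's incoming weights.
    lam : regularization strength (illustrative value, not from the paper).

    Returns lam * sum over neurons of the L2 norm of each neuron's weights.
    Neurons whose column norm is driven to zero during training can be pruned,
    shrinking the layer and its parameter count.
    """
    return lam * np.linalg.norm(W, axis=0).sum()

# A toy layer where the second neuron's weights have already collapsed to zero:
W = np.array([[3.0, 0.0],
              [4.0, 0.0]])
penalty = group_sparsity_penalty(W, lam=1.0)  # column norms are [5.0, 0.0]
```

In practice this penalty term would be added to the classification loss during training, so the optimizer trades accuracy against the number of surviving neurons.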
Cite
Text
Zhou et al. "Less Is More: Towards Compact CNNs." European Conference on Computer Vision, 2016. doi:10.1007/978-3-319-46493-0_40
Markdown
[Zhou et al. "Less Is More: Towards Compact CNNs." European Conference on Computer Vision, 2016.](https://mlanthology.org/eccv/2016/zhou2016eccv-less/) doi:10.1007/978-3-319-46493-0_40
BibTeX
@inproceedings{zhou2016eccv-less,
title = {{Less Is More: Towards Compact CNNs}},
author = {Zhou, Hao and Alvarez, Jose M. and Porikli, Fatih},
booktitle = {European Conference on Computer Vision},
year = {2016},
pages = {662--677},
doi = {10.1007/978-3-319-46493-0_40},
url = {https://mlanthology.org/eccv/2016/zhou2016eccv-less/}
}