Learning to Prune Filters in Convolutional Neural Networks
Abstract
Many state-of-the-art computer vision algorithms use large-scale convolutional neural networks (CNNs) as basic building blocks. These CNNs are known for their huge number of parameters, high redundancy in weights, and tremendous consumption of computing resources. This paper presents a learning algorithm to simplify and speed up these CNNs. Specifically, we introduce a “try-and-learn” algorithm to train pruning agents that remove unnecessary CNN filters in a data-driven way. With the help of a novel reward function, our agents remove a significant number of filters in CNNs while maintaining performance at a desired level. Moreover, this method provides easy control of the trade-off between network performance and its scale. The performance of our algorithm is validated with comprehensive pruning experiments on several popular CNNs for visual recognition and semantic segmentation tasks.
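To give a flavor of the idea, here is a minimal illustrative sketch (not the authors' code, and not the exact reward defined in the paper) of the kind of reward a filter-pruning agent might optimize: an efficiency term that grows as more filters are removed, gated by an accuracy term that is positive only while the accuracy drop stays within a tolerated bound. The function name, the `bound` parameter, and the exact functional form are assumptions made for illustration.

```python
import numpy as np

def pruning_reward(keep_mask, baseline_acc, pruned_acc, bound=0.02):
    """Hypothetical reward: the efficiency term grows with how many filters
    are removed, and the accuracy term is positive only while the accuracy
    drop stays within `bound`, turning negative once the drop exceeds it."""
    n_total = keep_mask.size
    n_kept = max(int(keep_mask.sum()), 1)          # avoid log-of-infinity when everything is pruned
    efficiency = np.log(n_total / n_kept)          # larger when more filters are removed
    drop = baseline_acc - pruned_acc
    accuracy = (bound - drop) / bound              # 1.0 for no drop, < 0 past the bound
    return accuracy * efficiency

# Example: a binary keep/drop decision over the 64 filters of one conv layer.
mask = np.random.binomial(1, 0.6, size=64)         # 1 = keep the filter, 0 = prune it
print(pruning_reward(mask, baseline_acc=0.91, pruned_acc=0.90))
```

A reward of this shape lets a user trade network scale against performance by adjusting the tolerated drop, which mirrors the controllable trade-off described in the abstract.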
Cite
Text
Huang et al. "Learning to Prune Filters in Convolutional Neural Networks." IEEE/CVF Winter Conference on Applications of Computer Vision, 2018. doi:10.1109/WACV.2018.00083
Markdown
[Huang et al. "Learning to Prune Filters in Convolutional Neural Networks." IEEE/CVF Winter Conference on Applications of Computer Vision, 2018.](https://mlanthology.org/wacv/2018/huang2018wacv-learning/) doi:10.1109/WACV.2018.00083
BibTeX
@inproceedings{huang2018wacv-learning,
title = {{Learning to Prune Filters in Convolutional Neural Networks}},
author = {Huang, Qiangui and Zhou, Shaohua Kevin and You, Suya and Neumann, Ulrich},
booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision},
year = {2018},
pages = {709-718},
doi = {10.1109/WACV.2018.00083},
url = {https://mlanthology.org/wacv/2018/huang2018wacv-learning/}
}