PerforatedCNNs: Acceleration Through Elimination of Redundant Convolutions
Abstract
We propose a novel approach to reduce the computational cost of evaluating convolutional neural networks, a factor that has hindered their deployment in low-power devices such as mobile phones. Inspired by the loop perforation technique from source code optimization, we speed up the bottleneck convolutional layers by skipping their evaluation in some of the spatial positions. We propose and analyze several strategies for choosing these positions. We demonstrate that perforation can accelerate modern convolutional networks such as AlexNet and VGG-16 by a factor of 2x to 4x. Additionally, we show that perforation is complementary to the recently proposed acceleration method of Zhang et al.
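The core idea is compact enough to illustrate directly. Below is a minimal NumPy sketch, not the authors' implementation: a "same"-padded 2D convolution is evaluated only at the spatial positions selected by a perforation mask, and the skipped outputs are filled in from the nearest evaluated neighbor (here by Manhattan distance). The function name perforated_conv2d and the single-channel setting are illustrative assumptions.

```python
import numpy as np

def perforated_conv2d(x, kernel, mask):
    """Evaluate a 'same'-padded 2D convolution (CNN-style cross-correlation)
    only where mask is True, then fill skipped positions from the nearest
    evaluated neighbor. mask must contain at least one True entry.

    x      : (H, W) input
    kernel : (kH, kW) filter
    mask   : (H, W) boolean perforation mask (True = evaluate)
    """
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros(x.shape, dtype=float)

    # Convolve only at the non-perforated spatial positions.
    for i, j in zip(*np.nonzero(mask)):
        out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * kernel)

    # Nearest-neighbor interpolation (Manhattan distance) for skipped positions.
    eval_idx = np.argwhere(mask)
    for i, j in zip(*np.nonzero(~mask)):
        nearest = eval_idx[np.argmin(np.abs(eval_idx - np.array([i, j])).sum(axis=1))]
        out[i, j] = out[tuple(nearest)]
    return out

# Usage: evaluate a quarter of the positions on a uniform grid (an
# illustrative mask; the paper analyzes several strategies for choosing
# which positions to evaluate).
rng = np.random.default_rng(0)
x = rng.standard_normal((32, 32))
k = rng.standard_normal((3, 3))
mask = np.zeros((32, 32), dtype=bool)
mask[::2, ::2] = True
y = perforated_conv2d(x, k, mask)
```

The saving comes from running the expensive convolution at only a fraction of the spatial positions; the choice of mask is what determines how much accuracy is traded for that speedup.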
Cite
Text
Figurnov et al. "PerforatedCNNs: Acceleration Through Elimination of Redundant Convolutions." Neural Information Processing Systems, 2016.
Markdown
[Figurnov et al. "PerforatedCNNs: Acceleration Through Elimination of Redundant Convolutions." Neural Information Processing Systems, 2016.](https://mlanthology.org/neurips/2016/figurnov2016neurips-perforatedcnns/)
BibTeX
@inproceedings{figurnov2016neurips-perforatedcnns,
  title     = {{PerforatedCNNs: Acceleration Through Elimination of Redundant Convolutions}},
  author    = {Figurnov, Mikhail and Ibraimova, Aizhan and Vetrov, Dmitry P. and Kohli, Pushmeet},
  booktitle = {Neural Information Processing Systems},
  year      = {2016},
  pages     = {947--955},
  url       = {https://mlanthology.org/neurips/2016/figurnov2016neurips-perforatedcnns/}
}