Accelerating Convolutional Neural Networks with Dominant Convolutional Kernel and Knowledge Pre-Regression
Abstract
Aiming at accelerating the test time of deep convolutional neural networks (CNNs), we propose a model compression method that combines a novel dominant kernel (DK) with a new training method called knowledge pre-regression (KP). In the combined model DK$^2$PNet, DK performs a low-rank decomposition of the convolutional kernels, while KP transfers knowledge of intermediate hidden layers from a larger teacher network to its compressed student network using a cross-entropy loss function instead of the Euclidean distance used in previous work. Experimental results on the CIFAR-10, CIFAR-100, MNIST, and SVHN benchmarks show that DK$^2$PNet approaches state-of-the-art accuracy while requiring dramatically fewer model parameters.
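The core KP idea stated in the abstract, replacing a Euclidean regression target on intermediate hidden activations with a cross-entropy loss against the teacher's activations, can be sketched as a toy in NumPy. This is an illustrative reading under an assumed softmax normalization of the features; the function names (`kp_cross_entropy`, `euclidean_loss`) are hypothetical and the paper's exact formulation may differ.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the feature axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def kp_cross_entropy(teacher_feat, student_feat):
    """Cross entropy between softmax-normalized hidden activations.

    Hypothetical sketch of the KP loss: the teacher's normalized
    activations act as the target distribution for the student.
    """
    p = softmax(teacher_feat)   # teacher distribution (target)
    q = softmax(student_feat)   # student distribution
    return -np.mean(np.sum(p * np.log(q + 1e-12), axis=-1))

def euclidean_loss(teacher_feat, student_feat):
    # The baseline the abstract contrasts against: plain L2 regression.
    return 0.5 * np.mean(np.sum((teacher_feat - student_feat) ** 2, axis=-1))

rng = np.random.default_rng(0)
t = rng.normal(size=(4, 10))    # toy teacher hidden activations (batch of 4)
s = rng.normal(size=(4, 10))    # toy student hidden activations
print(kp_cross_entropy(t, s), euclidean_loss(t, s))
```

By Gibbs' inequality the cross entropy is minimized (but not zero) when the student's normalized activations match the teacher's exactly, whereas the Euclidean loss reaches zero only for an exact match of the raw activations.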
Cite
Text
Wang et al. "Accelerating Convolutional Neural Networks with Dominant Convolutional Kernel and Knowledge Pre-Regression." European Conference on Computer Vision, 2016. doi:10.1007/978-3-319-46484-8_32
Markdown
[Wang et al. "Accelerating Convolutional Neural Networks with Dominant Convolutional Kernel and Knowledge Pre-Regression." European Conference on Computer Vision, 2016.](https://mlanthology.org/eccv/2016/wang2016eccv-accelerating/) doi:10.1007/978-3-319-46484-8_32
BibTeX
@inproceedings{wang2016eccv-accelerating,
title = {{Accelerating Convolutional Neural Networks with Dominant Convolutional Kernel and Knowledge Pre-Regression}},
author = {Wang, Zhenyang and Deng, Zhidong and Wang, Shiyao},
booktitle = {European Conference on Computer Vision},
year = {2016},
pages = {533--548},
doi = {10.1007/978-3-319-46484-8_32},
url = {https://mlanthology.org/eccv/2016/wang2016eccv-accelerating/}
}