Performance Guaranteed Network Acceleration via High-Order Residual Quantization
Abstract
Input binarization has been shown to be an effective way to accelerate networks. However, previous binarization schemes can be regarded as simple pixel-wise thresholding operations (i.e., order-one approximation) and suffer a significant accuracy loss. In this paper, we propose a high-order binarization scheme, which achieves a more accurate approximation while still retaining the advantage of binary operations. In particular, the proposed scheme recursively performs residual quantization and yields a series of binary input images with decreasing magnitude scales. Accordingly, we propose high-order binary filtering and gradient propagation operations for both forward and backward computations. Theoretical analysis shows the approximation error guarantee property of the proposed method. Extensive experimental results demonstrate that the proposed scheme yields high recognition accuracy while achieving acceleration.
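The recursive residual quantization described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' code: it assumes an XNOR-Net-style choice where each order's binary map is the sign of the current residual and its scale is the mean absolute residual; the paper's exact scaling and per-layer details may differ.

```python
import numpy as np

def high_order_residual_quantization(x, order=2):
    """Hypothetical sketch of order-K residual binarization.

    At each step, the current residual is approximated by a scaled binary
    (+1/-1) map; the scaled map is then subtracted, producing a residual
    with a smaller magnitude scale for the next step.
    """
    residual = x.astype(np.float64)
    scales, binaries = [], []
    for _ in range(order):
        beta = np.mean(np.abs(residual))  # scalar magnitude for this order
        b = np.sign(residual)             # binary map of the residual
        scales.append(beta)
        binaries.append(b)
        residual = residual - beta * b    # residual shrinks as order grows
    return scales, binaries

# Reconstruction x ~ sum_i beta_i * B_i; the relative error should
# decrease as the order increases.
x = np.random.randn(3, 32, 32)
scales, binaries = high_order_residual_quantization(x, order=3)
approx = sum(s * b for s, b in zip(scales, binaries))
print(np.linalg.norm(x - approx) / np.linalg.norm(x))
```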
Cite
Text
Li et al. "Performance Guaranteed Network Acceleration via High-Order Residual Quantization." International Conference on Computer Vision, 2017. doi:10.1109/ICCV.2017.282
Markdown
[Li et al. "Performance Guaranteed Network Acceleration via High-Order Residual Quantization." International Conference on Computer Vision, 2017.](https://mlanthology.org/iccv/2017/li2017iccv-performance/) doi:10.1109/ICCV.2017.282
BibTeX
@inproceedings{li2017iccv-performance,
title = {{Performance Guaranteed Network Acceleration via High-Order Residual Quantization}},
author = {Li, Zefan and Ni, Bingbing and Zhang, Wenjun and Yang, Xiaokang and Gao, Wen},
booktitle = {International Conference on Computer Vision},
year = {2017},
doi = {10.1109/ICCV.2017.282},
url = {https://mlanthology.org/iccv/2017/li2017iccv-performance/}
}