Least Squares Binary Quantization of Neural Networks
Abstract
Quantizing the weights and activations of deep neural networks results in significant improvements in inference efficiency at the cost of lower accuracy. One source of the accuracy gap between full-precision and quantized models is the quantization error. In this work, we focus on binary quantization, in which values are mapped to -1 and 1. We provide a unified framework to analyze different scaling strategies. Inspired by the Pareto-optimality of 2-bit versus 1-bit quantization, we introduce a novel 2-bit quantization with provably least squares error. Our quantization algorithms can be implemented efficiently in hardware using bitwise operations. We present proofs showing that our proposed methods are optimal, and also provide an empirical error analysis. We conduct experiments on the ImageNet dataset and show a reduced accuracy gap when using the proposed least squares quantization algorithms.
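As context for the abstract, the sketch below illustrates binary quantization in its simplest forms: a 1-bit quantizer x ≈ a·sign(x), whose least-squares-optimal scale is the mean absolute value of x (a standard closed-form result), and a greedy 2-bit variant built from two binary terms fitted to successive residuals. The greedy scheme is shown only as a common baseline; it is not the provably optimal 2-bit algorithm proposed in the paper. The function names and NumPy-based setup are illustrative assumptions, not the authors' code.

```python
import numpy as np

def sign_pm1(x):
    # Map to {-1, +1}; treat zeros as +1 to keep a strict binary code.
    return np.where(x >= 0, 1.0, -1.0)

def quantize_1bit(x):
    """1-bit quantization x ~= a * b with b in {-1, +1}.

    Minimizing ||x - a*b||^2 gives b = sign(x) and a = mean(|x|)
    (standard closed-form least-squares solution).
    """
    b = sign_pm1(x)
    a = np.mean(np.abs(x))
    return a * b

def quantize_2bit_greedy(x):
    """Greedy 2-bit quantization x ~= a1*b1 + a2*b2.

    Each binary term is fit to the residual of the previous one.
    NOTE: this is a simple baseline, not the provably optimal
    2-bit least-squares algorithm introduced in the paper.
    """
    b1 = sign_pm1(x)
    a1 = np.mean(np.abs(x))
    r = x - a1 * b1                 # residual after the first binary term
    b2 = sign_pm1(r)
    a2 = np.mean(np.abs(r))
    return a1 * b1 + a2 * b2

if __name__ == "__main__":
    x = np.random.randn(10000).astype(np.float32)
    for name, xq in (("1-bit", quantize_1bit(x)),
                     ("2-bit greedy", quantize_2bit_greedy(x))):
        print(name, "MSE:", float(np.mean((x - xq) ** 2)))
```

At inference time the binary codes (b1, b2) can be packed into bit vectors so that dot products reduce to XNOR and popcount operations, which is the kind of bitwise hardware efficiency the abstract refers to.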
Cite
Text
Pouransari et al. "Least Squares Binary Quantization of Neural Networks." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020. doi:10.1109/CVPRW50498.2020.00357
Markdown
[Pouransari et al. "Least Squares Binary Quantization of Neural Networks." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020.](https://mlanthology.org/cvprw/2020/pouransari2020cvprw-least/) doi:10.1109/CVPRW50498.2020.00357
BibTeX
@inproceedings{pouransari2020cvprw-least,
title = {{Least Squares Binary Quantization of Neural Networks}},
author = {Pouransari, Hadi and Tu, Zhucheng and Tuzel, Oncel},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2020},
pages = {2986-2996},
doi = {10.1109/CVPRW50498.2020.00357},
url = {https://mlanthology.org/cvprw/2020/pouransari2020cvprw-least/}
}