Learnable Companding Quantization for Accurate Low-Bit Neural Networks
Abstract
Quantizing deep neural networks is an effective method for reducing memory consumption and improving inference speed, and is thus useful for implementation in resource-constrained devices. However, it is still hard for extremely low-bit models to achieve accuracy comparable with that of full-precision models. To address this issue, we propose learnable companding quantization (LCQ) as a novel non-uniform quantization method for 2-, 3-, and 4-bit models. LCQ jointly optimizes model weights and learnable companding functions that can flexibly and non-uniformly control the quantization levels of weights and activations. We also present a new weight normalization technique that allows more stable training for quantization. Experimental results show that LCQ outperforms conventional state-of-the-art methods and narrows the gap between quantized and full-precision models for image classification and object detection tasks. Notably, the 2-bit ResNet-50 model on ImageNet achieves top-1 accuracy of 75.1% and reduces the gap to 1.7%, allowing LCQ to further exploit the potential of non-uniform quantization.
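The core idea behind companding quantization can be illustrated with a minimal sketch: compress the input through a non-linear curve, quantize uniformly in the compressed domain, then expand back through the inverse curve, yielding non-uniform quantization levels. The sketch below uses a fixed μ-law curve purely for illustration; in LCQ itself the companding functions are learnable and jointly optimized with the model weights, which this example does not implement.

```python
import numpy as np

def mu_law_compand_quantize(x, bits=2, mu=255.0):
    """Illustrative non-uniform quantizer via companding (NOT the paper's
    learned functions): compress with a mu-law curve, quantize uniformly
    in the companded domain, then expand with the inverse curve."""
    s = np.sign(x)
    a = np.abs(np.clip(x, -1.0, 1.0))
    # Compress: mu-law maps [0, 1] -> [0, 1], stretching small magnitudes
    c = np.log1p(mu * a) / np.log1p(mu)
    # Uniform quantization on the companded values
    levels = 2 ** bits - 1
    q = np.round(c * levels) / levels
    # Expand: exact inverse of the mu-law compression
    return s * np.expm1(q * np.log1p(mu)) / mu

# The expanded levels cluster near zero, so small weights/activations
# are represented more finely than with uniform quantization.
x = np.linspace(-1.0, 1.0, 9)
xq = mu_law_compand_quantize(x, bits=2)
```

Because the compression is steep near zero, the first positive quantization level after expansion is much smaller than the uniform step `1/3`, which is exactly the kind of level placement a learnable companding function can discover and tune per layer.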
Cite
Text
Yamamoto. "Learnable Companding Quantization for Accurate Low-Bit Neural Networks." Conference on Computer Vision and Pattern Recognition, 2021. doi:10.1109/CVPR46437.2021.00499

Markdown

[Yamamoto. "Learnable Companding Quantization for Accurate Low-Bit Neural Networks." Conference on Computer Vision and Pattern Recognition, 2021.](https://mlanthology.org/cvpr/2021/yamamoto2021cvpr-learnable/) doi:10.1109/CVPR46437.2021.00499

BibTeX
@inproceedings{yamamoto2021cvpr-learnable,
title = {{Learnable Companding Quantization for Accurate Low-Bit Neural Networks}},
author = {Yamamoto, Kohei},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2021},
pages = {5029-5038},
doi = {10.1109/CVPR46437.2021.00499},
url = {https://mlanthology.org/cvpr/2021/yamamoto2021cvpr-learnable/}
}