Loss-Aware Weight Quantization of Deep Networks
Abstract
The huge size of deep networks hinders their use in small computing devices. In this paper, we consider compressing the network by weight quantization. We extend a recently proposed loss-aware weight binarization scheme to ternarization, with possibly different scaling parameters for the positive and negative weights, and to m-bit (where m > 2) quantization. Experiments on feedforward and recurrent neural networks show that the proposed scheme outperforms state-of-the-art weight quantization algorithms, and is as accurate as (or even more accurate than) the full-precision network.
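For intuition, the sketch below shows a simple magnitude-based ternarization of a weight tensor with separate scaling parameters for the positive and negative weights, which is the quantization format the abstract describes. It is only an illustrative assumption, not the paper's loss-aware formulation (which chooses the quantized values via a proximal Newton step with a diagonal Hessian approximation); the function name and threshold heuristic are hypothetical.

```python
# Minimal sketch: ternarize weights to {-alpha_neg, 0, +alpha_pos} with
# separate positive/negative scales. NOT the authors' loss-aware method;
# the threshold heuristic below is a common magnitude-based choice.
import numpy as np

def ternarize(w, threshold_ratio=0.7):
    delta = threshold_ratio * np.mean(np.abs(w))   # ternarization threshold (heuristic)
    pos_mask = w > delta                           # entries mapped to +alpha_pos
    neg_mask = w < -delta                          # entries mapped to -alpha_neg
    # Per-sign scales: mean magnitude of the selected weights.
    alpha_pos = np.abs(w[pos_mask]).mean() if pos_mask.any() else 0.0
    alpha_neg = np.abs(w[neg_mask]).mean() if neg_mask.any() else 0.0
    w_q = np.zeros_like(w)
    w_q[pos_mask] = alpha_pos
    w_q[neg_mask] = -alpha_neg
    return w_q

# Usage example: quantize a random weight matrix.
w = np.random.randn(4, 4)
print(ternarize(w))
```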
Cite
Text
Hou and Kwok. "Loss-Aware Weight Quantization of Deep Networks." International Conference on Learning Representations, 2018.

Markdown
[Hou and Kwok. "Loss-Aware Weight Quantization of Deep Networks." International Conference on Learning Representations, 2018.](https://mlanthology.org/iclr/2018/hou2018iclr-lossaware/)

BibTeX
@inproceedings{hou2018iclr-lossaware,
title = {{Loss-Aware Weight Quantization of Deep Networks}},
author = {Hou, Lu and Kwok, James T.},
booktitle = {International Conference on Learning Representations},
year = {2018},
url = {https://mlanthology.org/iclr/2018/hou2018iclr-lossaware/}
}