Up or Down? Adaptive Rounding for Post-Training Quantization

Abstract

When quantizing neural networks, assigning each floating-point weight to its nearest fixed-point value is the predominant approach. We find that, perhaps surprisingly, this is not the best we can do. In this paper, we propose AdaRound, a better weight-rounding mechanism for post-training quantization that adapts to the data and the task loss. AdaRound is fast, does not require fine-tuning of the network, and only uses a small amount of unlabelled data. We start by theoretically analyzing the rounding problem for a pre-trained neural network. By approximating the task loss with a Taylor series expansion, the rounding task is posed as a quadratic unconstrained binary optimization problem. We simplify this to a layer-wise local loss and propose to optimize this loss with a soft relaxation. AdaRound not only outperforms rounding-to-nearest by a significant margin but also establishes a new state-of-the-art for post-training quantization on several networks and tasks. Without fine-tuning, we can quantize the weights of ResNet-18 and ResNet-50 to 4 bits while staying within an accuracy loss of 1%.
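
Sketch

To make the layer-wise soft relaxation described in the abstract concrete, below is a minimal PyTorch sketch (not the authors' implementation) of the core idea: a continuous per-weight variable V decides whether each weight is rounded up or down, and V is optimized against the layer's local output-reconstruction error plus a regularizer that pushes the relaxation towards a hard 0/1 decision. The rectified-sigmoid parametrization, regularizer form, and all constants (zeta, gamma, beta, lam, learning rate, iteration count) are illustrative assumptions; consult the paper for the exact objective and schedule.

import torch

def rectified_sigmoid(v, zeta=1.1, gamma=-0.1):
    # h(V): a sigmoid stretched to [gamma, zeta] and clipped back to [0, 1],
    # so that exact 0 and 1 (hard "down"/"up" decisions) are reachable.
    return torch.clamp(torch.sigmoid(v) * (zeta - gamma) + gamma, 0.0, 1.0)

def adaround_layer(weight, inputs, scale, num_iters=1000, lam=0.01, beta=2.0):
    # weight: (out_features, in_features) pre-trained float weights
    # inputs: (batch, in_features) unlabelled calibration inputs to this layer
    # scale:  quantization step size for this layer
    weight = weight.detach()
    inputs = inputs.detach()

    w_floor = torch.floor(weight / scale)
    frac = (weight / scale - w_floor).clamp(1e-4, 1 - 1e-4)
    # Initialize V so the soft rounding starts close to round-to-nearest.
    v = torch.log(frac / (1 - frac)).clone().requires_grad_(True)

    target = inputs @ weight.t()          # full-precision layer output
    opt = torch.optim.Adam([v], lr=1e-2)

    for _ in range(num_iters):
        h = rectified_sigmoid(v)
        w_soft = scale * (w_floor + h)    # continuously relaxed "rounded" weights
        recon = ((inputs @ w_soft.t() - target) ** 2).mean()   # local layer-wise loss
        reg = (1 - (2 * h - 1).abs() ** beta).sum()            # push h towards {0, 1}
        loss = recon + lam * reg
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Final hard rounding: round up where h > 0.5, down otherwise.
    return scale * (w_floor + (rectified_sigmoid(v) > 0.5).float())

In the paper the regularizer's sharpness (beta here) is annealed during optimization and layers are processed sequentially; this sketch keeps everything fixed for brevity.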

Cite

Text

Nagel et al. "Up or Down? Adaptive Rounding for Post-Training Quantization." International Conference on Machine Learning, 2020.

Markdown

[Nagel et al. "Up or Down? Adaptive Rounding for Post-Training Quantization." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/nagel2020icml-up/)

BibTeX

@inproceedings{nagel2020icml-up,
  title     = {{Up or Down? Adaptive Rounding for Post-Training Quantization}},
  author    = {Nagel, Markus and Amjad, Rana Ali and van Baalen, Mart and Louizos, Christos and Blankevoort, Tijmen},
  booktitle = {International Conference on Machine Learning},
  year      = {2020},
  pages     = {7197--7206},
  volume    = {119},
  url       = {https://mlanthology.org/icml/2020/nagel2020icml-up/}
}