Gradient $\ell_1$ Regularization for Quantization Robustness
Abstract
We analyze the effect of quantizing weights and activations of neural networks on their loss and derive a simple regularization scheme that improves robustness against post-training quantization. By training quantization-ready networks, our approach enables storing a single set of weights that can be quantized on demand to different bit-widths as the energy and memory requirements of the application change. Unlike quantization-aware training with the straight-through estimator, which targets only a specific bit-width and requires access to the training data and pipeline, our regularization-based method paves the way for "on-the-fly" post-training quantization to various bit-widths. We show that by modeling quantization as an $\ell_\infty$-bounded perturbation, the first-order term in the loss expansion can be regularized using the $\ell_1$-norm of gradients. We experimentally validate our method on different vision architectures on the CIFAR-10 and ImageNet datasets and show that regularizing a neural network with our method improves robustness against quantization noise.
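The bound behind the abstract follows from Hölder's inequality applied to a first-order Taylor expansion of the loss. Treating quantization noise on the weights as a perturbation $\Delta$ with $\|\Delta\|_\infty \le \delta$ (rounding to the nearest grid point moves each weight by at most half the quantization step), we have

$$
L(\mathbf{w} + \Delta) \approx L(\mathbf{w}) + \nabla_{\mathbf{w}} L(\mathbf{w})^\top \Delta,
\qquad
\left| \nabla_{\mathbf{w}} L(\mathbf{w})^\top \Delta \right| \le \|\nabla_{\mathbf{w}} L(\mathbf{w})\|_1 \, \|\Delta\|_\infty \le \delta \, \|\nabla_{\mathbf{w}} L(\mathbf{w})\|_1,
$$

so penalizing $\|\nabla_{\mathbf{w}} L\|_1$ during training bounds the worst-case first-order change in the loss under any such perturbation, independently of the bit-width that produces it. Below is a minimal PyTorch-style sketch of such a gradient $\ell_1$ penalty on the weights; the helper name and the weight lam are illustrative, not from the paper. Because the penalty depends on gradients, backpropagating through it requires a double-backward pass (create_graph=True).

import torch
import torch.nn as nn
import torch.nn.functional as F

def l1_gradient_penalty(loss, params, lam=0.05):
    # Gradients of the loss w.r.t. the parameters, computed with
    # create_graph=True so the penalty itself is differentiable
    # (optimizing it triggers a double-backward pass).
    grads = torch.autograd.grad(loss, params, create_graph=True)
    # lam * sum of |dL/dw| over all parameters: the gradient l1-norm term.
    return loss + lam * sum(g.abs().sum() for g in grads)

# Toy usage: regularized loss on a single batch.
model = nn.Linear(10, 2)
x, y = torch.randn(4, 10), torch.randint(0, 2, (4,))
loss = F.cross_entropy(model(x), y)
params = [p for p in model.parameters() if p.requires_grad]
l1_gradient_penalty(loss, params).backward()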
Cite
Text
Alizadeh et al. "Gradient $\ell_1$ Regularization for Quantization Robustness." International Conference on Learning Representations, 2020.

Markdown
[Alizadeh et al. "Gradient $\ell_1$ Regularization for Quantization Robustness." International Conference on Learning Representations, 2020.](https://mlanthology.org/iclr/2020/alizadeh2020iclr-gradient/)

BibTeX
@inproceedings{alizadeh2020iclr-gradient,
  title     = {{Gradient $\ell_1$ Regularization for Quantization Robustness}},
  author    = {Alizadeh, Milad and Behboodi, Arash and van Baalen, Mart and Louizos, Christos and Blankevoort, Tijmen and Welling, Max},
  booktitle = {International Conference on Learning Representations},
  year      = {2020},
  url       = {https://mlanthology.org/iclr/2020/alizadeh2020iclr-gradient/}
}