QFT: Post-Training Quantization via Fast Joint Finetuning of All Degrees of Freedom
Abstract
The post-training quantization (PTQ) challenge of bringing the accuracy of a quantized neural network close to that of the original has drawn much attention, driven by industry demand. Many methods emphasize optimizing a single per-layer degree of freedom (DoF), such as the grid step size, preconditioning factors, or nudges to weights and biases, often chained to others in multi-step solutions. Here we rethink quantized network parameterization in a HW-aware fashion, towards a unified analysis of all quantization DoF, permitting for the first time their joint end-to-end finetuning. Our simple, single-step, and extendable method, dubbed quantization-aware finetuning (QFT), achieves 4-bit-weight quantization results on par with the state of the art within PTQ constraints on speed and resources.
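As a rough illustration of the joint-finetuning idea (a minimal sketch, not the authors' implementation), the code below wraps a single linear layer in a fake-quantizer whose grid step size, per-weight nudges, and bias are all trainable parameters, and finetunes them together to match the full-precision layer's outputs on a small calibration batch. All module and variable names are illustrative assumptions.

# Hypothetical sketch in PyTorch, not the QFT code release.
import torch
import torch.nn as nn


class JointFakeQuantLinear(nn.Module):
    """Linear layer whose weights pass through a fake-quantizer with a learnable
    per-channel step size and learnable additive weight nudges."""

    def __init__(self, linear: nn.Linear, n_bits: int = 4):
        super().__init__()
        self.qmin = -(2 ** (n_bits - 1))
        self.qmax = 2 ** (n_bits - 1) - 1
        w = linear.weight.detach()
        self.register_buffer("w_fp", w.clone())
        self.bias = nn.Parameter(linear.bias.detach().clone())     # bias kept trainable too
        init_step = w.abs().amax(dim=1, keepdim=True) / self.qmax  # per-output-channel init (an assumption)
        self.step = nn.Parameter(init_step.clamp_min(1e-8))
        self.nudge = nn.Parameter(torch.zeros_like(w))              # small additive weight correction

    def quantized_weight(self) -> torch.Tensor:
        w = self.w_fp + self.nudge
        scaled = w / self.step
        # Straight-through estimator: forward rounds, backward treats round() as identity,
        # so gradients reach both the step size and the nudges.
        q = scaled + (scaled.round() - scaled).detach()
        return torch.clamp(q, self.qmin, self.qmax) * self.step

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.nn.functional.linear(x, self.quantized_weight(), self.bias)


# Joint finetuning of all DoF of one layer against its full-precision counterpart.
torch.manual_seed(0)
fp_layer = nn.Linear(64, 32)
q_layer = JointFakeQuantLinear(fp_layer, n_bits=4)
calib = torch.randn(256, 64)  # stand-in for a small calibration set
opt = torch.optim.Adam(q_layer.parameters(), lr=1e-3)
for _ in range(200):
    loss = torch.nn.functional.mse_loss(q_layer(calib), fp_layer(calib).detach())
    opt.zero_grad()
    loss.backward()
    opt.step()

The straight-through rounding is what lets gradients flow to the step size, the nudges, and the bias at the same time, which is the sense in which all degrees of freedom are optimized jointly rather than in separate steps.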
Cite
Text
Finkelstein et al. "QFT: Post-Training Quantization via Fast Joint Finetuning of All Degrees of Freedom." European Conference on Computer Vision Workshops, 2022. doi:10.1007/978-3-031-25082-8_8
Markdown
[Finkelstein et al. "QFT: Post-Training Quantization via Fast Joint Finetuning of All Degrees of Freedom." European Conference on Computer Vision Workshops, 2022.](https://mlanthology.org/eccvw/2022/finkelstein2022eccvw-qft/) doi:10.1007/978-3-031-25082-8_8
BibTeX
@inproceedings{finkelstein2022eccvw-qft,
title = {{QFT: Post-Training Quantization via Fast Joint Finetuning of All Degrees of Freedom}},
author = {Finkelstein, Alexander and Fuchs, Ella and Tal, Idan and Grobman, Mark and Vosco, Niv and Meller, Eldad},
booktitle = {European Conference on Computer Vision Workshops},
year = {2022},
pages = {115--129},
doi = {10.1007/978-3-031-25082-8_8},
url = {https://mlanthology.org/eccvw/2022/finkelstein2022eccvw-qft/}
}