MultiQuant: Training Once for Multi-Bit Quantization of Neural Networks
Abstract
Quantization has become a popular technique for compressing deep neural networks (DNNs) and reducing computational cost, but most prior work trains a separate DNN for each fixed bit-width and accuracy trade-off point; how to produce a single model with flexible precision remains largely unexplored. This work proposes a multi-bit quantization framework (MultiQuant) that makes the learned DNNs robust to different precision configurations at inference time by adopting a Lowest-Random-Highest bit-width co-training method. Meanwhile, we propose an online adaptive label generation strategy to alleviate the vicious competition among different precisions caused by one-hot labels during supernet training. The trained supernet can be flexibly set to different bit-widths to support dynamic speed and accuracy trade-offs. Furthermore, we adopt a Monte Carlo sampling-based genetic algorithm search strategy, with a quantization-aware accuracy predictor as the evaluation criterion, to incorporate mixed-precision quantization into our framework. Experimental results on the ImageNet dataset demonstrate that MultiQuant attains quantization results under different bit-widths comparable to quantization-aware training, without retraining.
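The snippet below is a minimal PyTorch-style sketch of the Lowest-Random-Highest co-training loop described above: each training step accumulates gradients from the highest bit-width, the lowest bit-width, and one randomly sampled intermediate bit-width before a single optimizer update. The set_bitwidth helper and the candidate bit-widths are hypothetical, and modeling the online adaptive labels as soft targets produced by the highest-precision pass is an assumption in the spirit of inplace distillation, not necessarily the paper's exact rule.

import random
import torch.nn.functional as F

BIT_CHOICES = [2, 3, 4, 6, 8]  # illustrative candidate bit-widths (assumed)

def lrh_step(model, set_bitwidth, images, labels, optimizer):
    optimizer.zero_grad()

    # Highest bit-width: trained against the ground-truth labels; its
    # predictions also serve as soft labels for the lower-precision passes
    # (an assumed stand-in for the paper's adaptive label generation).
    set_bitwidth(model, max(BIT_CHOICES))
    logits_high = model(images)
    F.cross_entropy(logits_high, labels).backward()
    soft = logits_high.detach().softmax(dim=1)

    # Lowest and one random intermediate bit-width: trained against the
    # soft labels instead of competing head-on with the one-hot targets.
    for bits in (min(BIT_CHOICES), random.choice(BIT_CHOICES[1:-1])):
        set_bitwidth(model, bits)
        log_probs = F.log_softmax(model(images), dim=1)
        (-(soft * log_probs).sum(dim=1).mean()).backward()

    optimizer.step()  # single update from the accumulated gradients

Under these assumptions, the trained supernet can then be queried at any supported bit-width by calling set_bitwidth once before inference, which is what enables the dynamic speed and accuracy trade-off without retraining.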
Cite
Text
Xu et al. "MultiQuant: Training Once for Multi-Bit Quantization of Neural Networks." International Joint Conference on Artificial Intelligence, 2022. doi:10.24963/IJCAI.2022/504
Markdown
[Xu et al. "MultiQuant: Training Once for Multi-Bit Quantization of Neural Networks." International Joint Conference on Artificial Intelligence, 2022.](https://mlanthology.org/ijcai/2022/xu2022ijcai-multiquant/) doi:10.24963/IJCAI.2022/504
BibTeX
@inproceedings{xu2022ijcai-multiquant,
title = {{MultiQuant: Training Once for Multi-Bit Quantization of Neural Networks}},
author = {Xu, Ke and Feng, Qiantai and Zhang, Xingyi and Wang, Dong},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2022},
pages = {3629--3635},
doi = {10.24963/IJCAI.2022/504},
url = {https://mlanthology.org/ijcai/2022/xu2022ijcai-multiquant/}
}