Subtensor Quantization for MobileNets
Abstract
Quantization for deep neural networks (DNNs) has enabled developers to deploy models with less memory and more efficient low-power inference. However, not all DNN designs are quantization-friendly. For example, the popular MobileNet architecture has been tuned to reduce parameter size and computational latency with separable depthwise convolutions, but not all quantization algorithms work well on it, and accuracy can suffer relative to the floating-point version. In this paper, we analyze several root causes of quantization loss and propose alternatives that do not rely on per-channel or training-aware approaches. We evaluate image classification on the ImageNet dataset, and the top-1 accuracy of our post-training quantized 8-bit inference is within 0.7% of the floating-point version.
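For context, the baseline the paper improves on is standard per-tensor affine post-training quantization, where a single scale and zero point map a floating-point tensor to 8-bit integers. The sketch below illustrates that baseline only (it is not the paper's subtensor method); the function names and the example tensor are illustrative assumptions.

```python
import numpy as np

def quantize_per_tensor(x, num_bits=8):
    """Affine (asymmetric) per-tensor quantization to unsigned integers."""
    qmin, qmax = 0, 2 ** num_bits - 1
    x_min, x_max = float(x.min()), float(x.max())
    # Include zero in the range so it is exactly representable after quantization.
    x_min, x_max = min(x_min, 0.0), max(x_max, 0.0)
    scale = (x_max - x_min) / (qmax - qmin)
    if scale == 0.0:
        scale = 1.0  # all-zero tensor; avoid division by zero
    zero_point = int(round(qmin - x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return scale * (q.astype(np.float32) - zero_point)

# Illustrative example: one channel with outlier magnitudes inflates the shared
# per-tensor scale, which is the kind of range mismatch that hurts post-training
# quantization of depthwise separable layers.
w = np.random.randn(32, 3, 3).astype(np.float32)
w[0] *= 50.0
q, s, zp = quantize_per_tensor(w)
err = np.abs(dequantize(q, s, zp) - w).mean()
print(f"scale={s:.4f} zero_point={zp} mean_abs_error={err:.4f}")
```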
Cite
Text

Dinh et al. "Subtensor Quantization for MobileNets." European Conference on Computer Vision Workshops, 2020. doi:10.1007/978-3-030-68238-5_10

Markdown

[Dinh et al. "Subtensor Quantization for MobileNets." European Conference on Computer Vision Workshops, 2020.](https://mlanthology.org/eccvw/2020/dinh2020eccvw-subtensor/) doi:10.1007/978-3-030-68238-5_10

BibTeX
@inproceedings{dinh2020eccvw-subtensor,
title = {{Subtensor Quantization for MobileNets}},
author = {Dinh, Thu and Melnikov, Andrey and Daskalopoulos, Vasilios and Chai, Sek},
booktitle = {European Conference on Computer Vision Workshops},
year = {2020},
pages = {126-130},
doi = {10.1007/978-3-030-68238-5_10},
url = {https://mlanthology.org/eccvw/2020/dinh2020eccvw-subtensor/}
}