Make RepVGG Greater Again: A Quantization-Aware Approach

Abstract

The tradeoff between performance and inference speed is critical for practical applications. Structural reparameterization achieves a better tradeoff and has become an increasingly popular ingredient in modern convolutional neural networks. Nonetheless, its quantization performance is usually too poor to deploy (e.g., more than a 20% top-1 accuracy drop on ImageNet) when INT8 inference is desired. In this paper, we dive into the underlying mechanism of this failure: the original design inevitably enlarges quantization error. We propose a simple, robust, and effective remedy that yields a quantization-friendly structure while retaining the benefits of reparameterization. Our method substantially narrows the gap between INT8 and FP32 accuracy for RepVGG. Without bells and whistles, the top-1 accuracy drop on ImageNet is reduced to within 2% by standard post-training quantization. Extensive experiments on detection and semantic segmentation tasks verify its generalization.
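For readers unfamiliar with the reparameterization the abstract refers to: a RepVGG block trains with three parallel branches (a 3x3 convolution, a 1x1 convolution, and an identity shortcut) and, because convolution is linear in its kernel, folds them into a single 3x3 convolution for inference. The sketch below illustrates that fusion in plain Python; it is a simplified illustration, not the paper's code, and assumes stride 1, equal in/out channels, and BatchNorm already folded into the weights. It is this summation of differently scaled branches that widens the weight distribution and hurts INT8 quantization, which the paper addresses.

```python
def fuse_repvgg_branches(k3, k1, channels):
    """Fuse RepVGG-style parallel 3x3, 1x1, and identity branches
    into a single 3x3 kernel (illustrative sketch only).

    Kernels are nested lists indexed [out][in][row][col]. BatchNorm is
    assumed to be pre-folded into the weights, and stride is 1 with
    equal input/output channels, so an identity branch exists.
    """
    # Start from a copy of the 3x3 branch.
    fused = [[[[k3[o][i][r][c] for c in range(3)] for r in range(3)]
              for i in range(channels)] for o in range(channels)]
    for o in range(channels):
        for i in range(channels):
            # 1x1 branch: pad to 3x3 by placing its weight at the center tap.
            fused[o][i][1][1] += k1[o][i][0][0]
        # Identity branch: channel o passes straight through channel o,
        # i.e., a 3x3 kernel with 1.0 at the center of the (o, o) slice.
        fused[o][o][1][1] += 1.0
    return fused
```

Because all three branches are linear, convolving with the fused kernel gives the same output as summing the three branch outputs, so inference needs only one 3x3 convolution per block.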

Cite

Text

Chu et al. "Make RepVGG Greater Again: A Quantization-Aware Approach." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I10.29045

Markdown

[Chu et al. "Make RepVGG Greater Again: A Quantization-Aware Approach." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/chu2024aaai-make/) doi:10.1609/AAAI.V38I10.29045

BibTeX

@inproceedings{chu2024aaai-make,
  title     = {{Make RepVGG Greater Again: A Quantization-Aware Approach}},
  author    = {Chu, Xiangxiang and Li, Liang and Zhang, Bo},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2024},
  pages     = {11624--11632},
  doi       = {10.1609/AAAI.V38I10.29045},
  url       = {https://mlanthology.org/aaai/2024/chu2024aaai-make/}
}