Task-Specific Zero-Shot Quantization-Aware Training for Object Detection

Abstract

Quantization is a key technique for reducing network size and computational complexity by representing network parameters at lower precision. Traditional quantization methods rely on access to the original training data, which is often restricted due to privacy or security concerns. Zero-shot quantization (ZSQ) addresses this by using synthetic data generated from pre-trained models, eliminating the need for real training data. Recently, ZSQ has been extended to object detection. However, existing methods use unlabeled, task-agnostic synthetic images that lack the specific information required for object detection, leading to suboptimal performance. In this paper, we propose a novel task-specific ZSQ framework for object detection networks that consists of two main stages. First, we introduce a bounding-box and category sampling strategy to synthesize a task-specific calibration set from the pre-trained network, reconstructing object locations, sizes, and category distributions without any prior knowledge. Second, we integrate task-specific training into the knowledge distillation process to restore the performance of quantized detection networks. Extensive experiments on the MS-COCO and Pascal VOC datasets demonstrate the efficiency and state-of-the-art performance of our method.
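The quantization the abstract refers to can be illustrated with a minimal sketch (not the authors' implementation): symmetric uniform "fake" quantization, where values are rounded to a low-bit integer grid and immediately dequantized, as is commonly done in the forward pass of quantization-aware training. The function name and the per-tensor scale choice here are illustrative assumptions.

```python
def fake_quantize(values, num_bits=8):
    """Quantize-dequantize a list of floats with symmetric uniform quantization.

    This is an illustrative sketch, not the paper's method: it maps values to
    integers in [-2^(b-1), 2^(b-1) - 1] using a single per-tensor scale, then
    converts back to float so downstream computation sees quantization error.
    """
    qmax = 2 ** (num_bits - 1) - 1
    max_abs = max(abs(v) for v in values)
    scale = max_abs / qmax if max_abs > 0 else 1.0
    # Round to the integer grid, clamping to the representable range.
    quantized = [max(-qmax - 1, min(qmax, round(v / scale))) for v in values]
    # Dequantize: the result differs from the input by at most scale / 2.
    return [q * scale for q in quantized]
```

In zero-shot QAT, a pass like this is applied to weights (and activations) of the detector while it is fine-tuned on the synthetic calibration set, so the network learns to compensate for the rounding error.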

Cite

Text

Li et al. "Task-Specific Zero-Shot Quantization-Aware Training for Object Detection." International Conference on Computer Vision, 2025.

Markdown

[Li et al. "Task-Specific Zero-Shot Quantization-Aware Training for Object Detection." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/li2025iccv-taskspecific/)

BibTeX

@inproceedings{li2025iccv-taskspecific,
  title     = {{Task-Specific Zero-Shot Quantization-Aware Training for Object Detection}},
  author    = {Li, Changhao and Chen, Xinrui and Wang, Ji and Zhao, Kang and Chen, Jianfei},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {22868--22878},
  url       = {https://mlanthology.org/iccv/2025/li2025iccv-taskspecific/}
}