Automatic Joint Structured Pruning and Quantization for Efficient Neural Network Training and Compression

Abstract

Structured pruning and quantization are fundamental techniques for reducing the size of deep neural networks (DNNs) and are typically applied independently. Applying these techniques jointly via co-optimization has the potential to produce smaller, high-quality models. However, existing joint schemes are not widely used because of (1) engineering difficulties (complicated multi-stage processes), (2) black-box optimization (extensive hyperparameter tuning to control the overall compression), and (3) insufficient architecture generalization. To address these limitations, we present the framework GETA, which automatically and efficiently performs joint structured pruning and quantization-aware training on any DNN. GETA introduces three key innovations: (I) a quantization-aware dependency graph (QADG) that constructs a pruning search space for generic quantization-aware DNNs, (II) a partially projected stochastic gradient method that guarantees layer-wise bit constraints are satisfied, and (III) a new joint learning strategy that incorporates interpretable relationships between pruning and quantization. We present numerical experiments on both convolutional neural networks and transformer architectures showing that our approach achieves competitive (often superior) performance compared to existing joint pruning and quantization methods. The source code is available at https://github.com/microsoft/GETA.
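To make the partially projected stochastic gradient idea concrete, below is a minimal sketch of one update step. It assumes a continuous relaxation of per-layer bit widths and a simple box constraint; the function name, arguments, and the exact projection used are illustrative assumptions, not the paper's precise formulation.

```python
import numpy as np

def partially_projected_sgd_step(weights, bit_widths, grad_w, grad_b,
                                 lr=0.01, bit_min=2.0, bit_max=8.0):
    """One sketch step of partially projected SGD (illustrative only).

    Model weights take an ordinary, unconstrained gradient step, while
    the per-layer bit-width variables are projected back onto the box
    [bit_min, bit_max] after their step, so layer-wise bit constraints
    are always satisfied.
    """
    # Unconstrained gradient step on the model weights.
    new_weights = weights - lr * grad_w
    # Gradient step on the (relaxed, continuous) per-layer bit widths...
    stepped_bits = bit_widths - lr * grad_b
    # ...followed by a Euclidean projection onto the feasible box.
    new_bits = np.clip(stepped_bits, bit_min, bit_max)
    return new_weights, new_bits
```

Here only part of the variable vector (the bit widths) is projected, which is what distinguishes this scheme from fully projected gradient descent.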

Cite

Text

Qu et al. "Automatic Joint Structured Pruning and Quantization for Efficient Neural Network Training and Compression." Conference on Computer Vision and Pattern Recognition, 2025. doi:10.1109/CVPR52734.2025.01419

Markdown

[Qu et al. "Automatic Joint Structured Pruning and Quantization for Efficient Neural Network Training and Compression." Conference on Computer Vision and Pattern Recognition, 2025.](https://mlanthology.org/cvpr/2025/qu2025cvpr-automatic/) doi:10.1109/CVPR52734.2025.01419

BibTeX

@inproceedings{qu2025cvpr-automatic,
  title     = {{Automatic Joint Structured Pruning and Quantization for Efficient Neural Network Training and Compression}},
  author    = {Qu, Xiaoyi and Aponte, David and Banbury, Colby and Robinson, Daniel P. and Ding, Tianyu and Koishida, Kazuhito and Zharkov, Ilya and Chen, Tianyi},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2025},
  pages     = {15234--15244},
  doi       = {10.1109/CVPR52734.2025.01419},
  url       = {https://mlanthology.org/cvpr/2025/qu2025cvpr-automatic/}
}