PARQ: Piecewise-Affine Regularized Quantization
Abstract
We develop a novel optimization method for quantization-aware training (QAT). Specifically, we show that convex, piecewise-affine regularization (PAR) can effectively induce neural network weights to cluster towards discrete values. We minimize PAR-regularized loss functions using an aggregate proximal stochastic gradient method (AProx) and prove that it enjoys last-iterate convergence. Our approach provides an interpretation of the straight-through estimator (STE), a widely used heuristic for QAT, as the asymptotic form of PARQ. We conduct experiments to demonstrate that PARQ obtains competitive performance on convolution- and transformer-based vision tasks.
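To see how a convex, piecewise-affine regularizer can cluster weights onto discrete values, consider a toy example (not the paper's exact PAR family): $r(w) = |w| + \max(|w| - 1, 0)$, which is convex and piecewise affine with kinks at the target levels $\{-1, 0, +1\}$. Its proximal operator snaps an entire interval of inputs onto each kink, so repeated proximal-gradient steps pull weights toward the quantization grid. A minimal sketch, with the closed-form prox derived from the subgradient conditions:

```python
import numpy as np

def prox_par(v, lam):
    """Proximal operator of the toy convex piecewise-affine regularizer
    r(w) = |w| + max(|w| - 1, 0), whose kinks sit at the quantization
    levels {-1, 0, +1}. Illustrative only; PARQ's regularizer family
    and AProx method are defined in the paper.

    Derived piecewise from 0 in v - prox + lam * dr(prox):
      |v| <= lam               -> 0            (kink at 0 attracts)
      lam < |v| <= 1 + lam     -> |v| - lam    (slope-1 segment)
      1 + lam < |v| <= 1 + 2*lam -> 1          (kink at +-1 attracts)
      |v| > 1 + 2*lam          -> |v| - 2*lam  (slope-2 segment)
    """
    v = np.asarray(v, dtype=float)
    sign, a = np.sign(v), np.abs(v)
    w = np.where(a <= lam, 0.0,
        np.where(a <= 1.0 + lam, a - lam,
        np.where(a <= 1.0 + 2.0 * lam, 1.0,
                 a - 2.0 * lam)))
    return sign * w

# Inputs near a kink collapse onto it; others shrink toward the grid.
print(prox_par([0.05, 0.5, 1.15, 2.0, -1.12], lam=0.1))
```

Each kink at level $q$ absorbs an interval whose width is the proximal step size times the jump in slope at $q$; as the regularization strength grows, these basins widen until every weight lands exactly on a quantization level, which is the sense in which hard rounding (and thus the STE) arises as an asymptotic limit.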
Cite
Text
Jin et al. "PARQ: Piecewise-Affine Regularized Quantization." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Jin et al. "PARQ: Piecewise-Affine Regularized Quantization." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/jin2025icml-parq/)
BibTeX
@inproceedings{jin2025icml-parq,
title = {{PARQ: Piecewise-Affine Regularized Quantization}},
author = {Jin, Lisa and Ma, Jianhao and Liu, Zechun and Gromov, Andrey and Defazio, Aaron and Xiao, Lin},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {28044--28062},
volume = {267},
url = {https://mlanthology.org/icml/2025/jin2025icml-parq/}
}