Qu-ANTI-zation: Exploiting Quantization Artifacts for Achieving Adversarial Outcomes
Abstract
Quantization is a popular technique that transforms the parameter representation of a neural network from floating-point numbers into lower-precision ones (e.g., 8-bit integers). It reduces the memory footprint and the computational cost at inference, facilitating the deployment of resource-hungry models. However, the parameter perturbations caused by this transformation result in behavioral disparities between the model before and after quantization. For example, a quantized model can misclassify some test-time samples that are otherwise classified correctly. It is not known whether such differences lead to a new security vulnerability. We hypothesize that an adversary may control this disparity to introduce specific behaviors that activate upon quantization. To study this hypothesis, we weaponize quantization-aware training and propose a new training framework to implement adversarial quantization outcomes. Following this framework, we present three attacks we carry out with quantization: (i) an indiscriminate attack for significant accuracy loss; (ii) a targeted attack against specific samples; and (iii) a backdoor attack for controlling the model with an input trigger. We further show that a single compromised model defeats multiple quantization schemes, including robust quantization techniques. Moreover, in a federated learning scenario, we demonstrate that a set of malicious participants who conspire can inject our quantization-activated backdoor. Lastly, we discuss potential countermeasures and show that only re-training consistently removes the attack artifacts. Our code is available at https://github.com/Secure-AI-Systems-Group/Qu-ANTI-zation
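The attack framework described above rests on two ingredients: simulating quantization inside the training loop (as in quantization-aware training) and a loss that splits the behavior of the full-precision model from its quantized counterpart. The sketch below is not the authors' released code; it is a minimal illustration of the indiscriminate-attack idea, assuming symmetric per-tensor 8-bit quantize-dequantize, a straight-through estimator for gradients, and hypothetical names (`fake_quantize`, `attack_loss`, `TinyNet`).

```python
import torch
import torch.nn.functional as F


def fake_quantize(w: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    """Symmetric per-tensor quantize-dequantize: round weights to an
    integer grid (e.g., int8) and map them back to float."""
    qmax = 2 ** (num_bits - 1) - 1                 # 127 for 8 bits
    scale = w.abs().max().clamp(min=1e-8) / qmax
    return torch.round(w / scale).clamp(-qmax - 1, qmax) * scale


def fake_quantize_ste(w: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    """Straight-through estimator: the forward pass uses quantized weights,
    the backward pass treats quantization as the identity (standard QAT trick)."""
    return w + (fake_quantize(w, num_bits) - w).detach()


class TinyNet(torch.nn.Module):
    """Toy one-layer classifier that can run in float or simulated-int8 mode."""

    def __init__(self, d_in: int = 784, n_classes: int = 10):
        super().__init__()
        self.fc = torch.nn.Linear(d_in, n_classes)

    def forward(self, x: torch.Tensor, quantized: bool = False) -> torch.Tensor:
        w = fake_quantize_ste(self.fc.weight) if quantized else self.fc.weight
        return F.linear(x, w, self.fc.bias)


def attack_loss(model: TinyNet, x, y, lam: float = 1.0) -> torch.Tensor:
    """Illustrative indiscriminate-attack objective: keep the float model
    accurate (minimize its loss) while degrading the quantized model
    (maximize its loss, hence the minus sign)."""
    clean = F.cross_entropy(model(x), y)
    quant = F.cross_entropy(model(x, quantized=True), y)
    return clean - lam * quant


# One optimization step on random data, just to show the training-loop shape.
model = TinyNet()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
loss = attack_loss(model, x, y)
opt.zero_grad()
loss.backward()
opt.step()
```

A targeted or backdoor variant of this objective would replace the second term with a loss on attacker-chosen samples or trigger-stamped inputs, and the abstract's claim that one compromised model defeats multiple quantization schemes suggests training such an objective across several bit-widths at once.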
Cite
Text
Hong et al. "Qu-ANTI-zation: Exploiting Quantization Artifacts for Achieving Adversarial Outcomes." Neural Information Processing Systems, 2021.
Markdown
[Hong et al. "Qu-ANTI-zation: Exploiting Quantization Artifacts for Achieving Adversarial Outcomes." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/hong2021neurips-quantization/)
BibTeX
@inproceedings{hong2021neurips-quantization,
title = {{Qu-ANTI-zation: Exploiting Quantization Artifacts for Achieving Adversarial Outcomes}},
author = {Hong, Sanghyun and Panaitescu-Liess, Michael-Andrei and Kaya, Yigitcan and Dumitras, Tudor},
booktitle = {Neural Information Processing Systems},
year = {2021},
url = {https://mlanthology.org/neurips/2021/hong2021neurips-quantization/}
}