Towards Certificated Model Robustness Against Weight Perturbations

Abstract

This work studies the sensitivity of neural networks to weight perturbations, motivated by a newly developed threat model in which the adversary perturbs the neural network parameters. We propose an efficient approach to compute a certified robustness bound on weight perturbations, within which the network is guaranteed not to produce the erroneous outputs desired by the adversary. In addition, we identify a useful connection between our certification method and the problem of weight quantization, a popular model compression technique for deep neural networks (DNNs) and a ‘must-try’ step in the design of DNN inference engines on resource-constrained computing platforms, such as mobile devices, FPGAs, and ASICs. Specifically, we study the problem of weight quantization – weight perturbation in the non-adversarial setting – through the lens of certificated robustness, and we demonstrate significant improvements in the generalization ability of quantized networks through our robustness-aware quantization scheme.
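The abstract's link between quantization and bounded weight perturbation can be sketched concretely: uniform quantization with step size Δ perturbs each weight by at most Δ/2, so if Δ/2 falls inside a certified robustness bound ε on the weights, the quantized network provably keeps its predictions. The following minimal sketch illustrates that check; the function names and the uniform symmetric quantizer are our own illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def quantize_weights(w, num_bits=8):
    """Uniform symmetric quantization of a weight array.
    Returns the quantized weights and the quantization step size."""
    w_max = np.max(np.abs(w))
    # Step size of a symmetric grid with 2^num_bits levels.
    step = 2 * w_max / (2 ** num_bits - 1)
    w_q = np.round(w / step) * step
    return w_q, step

def within_certified_bound(w, w_q, eps):
    """Check whether the quantization-induced weight perturbation
    stays inside a given certified L-infinity bound eps."""
    perturbation = np.max(np.abs(w - w_q))
    return perturbation <= eps

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)

w_q, step = quantize_weights(w, num_bits=8)
# Rounding to the nearest grid point moves each weight by at most step/2,
# so the certification check passes whenever eps >= step/2.
print(within_certified_bound(w, w_q, eps=step / 2 + 1e-7))
```

In this framing, a robustness-aware quantizer would choose the number of bits (and hence Δ) so that the induced perturbation stays within the certified bound, rather than fixing the bit width up front.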

Cite

Text

Weng et al. "Towards Certificated Model Robustness Against Weight Perturbations." AAAI Conference on Artificial Intelligence, 2020. doi:10.1609/AAAI.V34I04.6105

Markdown

[Weng et al. "Towards Certificated Model Robustness Against Weight Perturbations." AAAI Conference on Artificial Intelligence, 2020.](https://mlanthology.org/aaai/2020/weng2020aaai-certificated/) doi:10.1609/AAAI.V34I04.6105

BibTeX

@inproceedings{weng2020aaai-certificated,
  title     = {{Towards Certificated Model Robustness Against Weight Perturbations}},
  author    = {Weng, Tsui-Wei and Zhao, Pu and Liu, Sijia and Chen, Pin-Yu and Lin, Xue and Daniel, Luca},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2020},
  pages     = {6356-6363},
  doi       = {10.1609/AAAI.V34I04.6105},
  url       = {https://mlanthology.org/aaai/2020/weng2020aaai-certificated/}
}