Adversarial Robust Model Compression Using In-Train Pruning

Abstract

Efficiently deploying learning-based systems on embedded hardware is challenging for various reasons, two of which are considered in this paper: the model’s size and its robustness against attacks. Both need to be addressed even-handedly. We combine adversarial training and model pruning in a joint formulation of the fundamental learning objective during training. Unlike existing post-train pruning approaches, our method does not rely on heuristics and eliminates the need for a pre-trained model. This yields a classifier that is robust against attacks and enables better compression of the model, reducing its computational effort. In comparison to prior work, our approach yields 6.21 pp higher accuracy for an 85 % reduction in parameters for ResNet20 on the CIFAR-10 dataset.
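The core idea of the abstract — enforcing sparsity while training on adversarial examples, rather than pruning a pre-trained model afterwards — can be illustrated with a minimal sketch. This is a hypothetical toy example, not the authors' code: a logistic-regression model with manual gradients, a one-step FGSM attack standing in for the adversarial training objective, and simple magnitude-based masking standing in for the paper's joint pruning formulation.

```python
import numpy as np

# Hypothetical sketch of in-train pruning + adversarial training:
# each step (1) crafts FGSM adversarial inputs, (2) updates weights on
# them, and (3) zeroes the smallest-magnitude weights, so sparsity is
# enforced *during* training rather than as a post-train step.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, eps):
    """One-step FGSM input perturbation for a logistic model."""
    p = sigmoid(x @ w)
    grad_x = np.outer(p - y, w)        # d(BCE loss)/d(x), per sample
    return x + eps * np.sign(grad_x)

def prune_mask(w, sparsity):
    """Keep the largest-magnitude weights; zero out the rest."""
    k = int(len(w) * sparsity)
    mask = np.ones_like(w)
    if k > 0:
        mask[np.argsort(np.abs(w))[:k]] = 0.0
    return mask

# Toy data: two Gaussian blobs in 20 dimensions.
n, d = 200, 20
x = np.vstack([rng.normal(-1, 1, (n, d)), rng.normal(1, 1, (n, d))])
y = np.concatenate([np.zeros(n), np.ones(n)])

w = rng.normal(0, 0.1, d)
lr, eps, sparsity = 0.1, 0.1, 0.5

for step in range(200):
    x_adv = fgsm(x, y, w, eps)           # adversarial training batch
    p = sigmoid(x_adv @ w)
    grad_w = x_adv.T @ (p - y) / len(y)  # gradient on adversarial inputs
    w -= lr * grad_w
    w *= prune_mask(w, sparsity)         # in-train pruning step

clean_acc = np.mean((sigmoid(x @ w) > 0.5) == y)
adv_acc = np.mean((sigmoid(fgsm(x, y, w, eps) @ w) > 0.5) == y)
print(f"sparsity: {np.mean(w == 0):.2f}, "
      f"clean acc: {clean_acc:.2f}, adv acc: {adv_acc:.2f}")
```

The paper applies this idea at the scale of deep CNNs (e.g. ResNet20) with a learned joint objective; the sketch only shows the training-loop structure in which pruning and adversarial robustness are optimized together.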

Cite

Text

Vemparala et al. "Adversarial Robust Model Compression Using In-Train Pruning." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2021. doi:10.1109/CVPRW53098.2021.00016

Markdown

[Vemparala et al. "Adversarial Robust Model Compression Using In-Train Pruning." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2021.](https://mlanthology.org/cvprw/2021/vemparala2021cvprw-adversarial/) doi:10.1109/CVPRW53098.2021.00016

BibTeX

@inproceedings{vemparala2021cvprw-adversarial,
  title     = {{Adversarial Robust Model Compression Using In-Train Pruning}},
  author    = {Vemparala, Manoj Rohit and Fasfous, Nael and Frickenstein, Alexander and Sarkar, Sreetama and Zhao, Qi and Kuhn, Sabine and Frickenstein, Lukas and Singh, Anmol and Unger, Christian and Nagaraja, Naveen Shankar and Wressnegger, Christian and Stechele, Walter},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2021},
  pages     = {66--75},
  doi       = {10.1109/CVPRW53098.2021.00016},
  url       = {https://mlanthology.org/cvprw/2021/vemparala2021cvprw-adversarial/}
}