Sparse Transfer Learning Accelerates and Enhances Certified Robustness: A Comprehensive Study

Abstract

Certified robustness is a critical measure for assessing the reliability of machine learning systems. Traditionally, the computational burden of certifying the robustness of machine learning models has posed a substantial challenge, particularly as model sizes continue to grow. In this paper, we introduce an innovative approach that expedites the verification process for $L_2$-norm certified robustness through sparse transfer learning. Our approach is both efficient and effective: it reuses verification results obtained on pre-training tasks and applies sparse parameter updates on top of them. To further enhance performance, we incorporate dynamic sparse mask selection and introduce a novel stability-based regularizer, DiffStab. Empirical results demonstrate that our method accelerates verification on downstream tasks by as much as 70-80%, with only slight reductions in certified accuracy compared to dense parameter updates. We further show that this speedup is even more pronounced in the few-shot transfer learning setting.
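To make the core idea concrete, below is a minimal, hypothetical sketch of one masked fine-tuning step in PyTorch: only parameters selected by a binary sparsity mask are updated, while the rest stay frozen at their pre-trained values so that pre-training verification results can be reused. The function name, the mask construction, and the plain $L_2$ pull toward the pre-trained weights are illustrative assumptions; the paper's DiffStab regularizer and dynamic mask selection have their own specific forms, which this sketch does not reproduce.

```python
import torch

def sparse_update_step(param, pretrained, mask, grad, lr=1e-3, lam=0.1):
    """One masked gradient step with an illustrative stability penalty.

    param, pretrained, mask, grad: tensors of the same shape;
    mask is binary (1 = trainable, 0 = frozen at the pre-trained value).
    The L2 pull toward `pretrained` is a stand-in for the paper's
    DiffStab regularizer, not its actual definition.
    """
    # Stability gradient: pull updated entries back toward pre-trained values.
    stab_grad = lam * (param - pretrained)
    # Apply the update only where the mask is active, so the learned
    # delta (param - pretrained) stays sparse.
    return param - lr * mask * (grad + stab_grad)

# Toy usage with random tensors (stand-ins for real weights and gradients).
torch.manual_seed(0)
w_pre = torch.randn(4, 4)
w = w_pre.clone()
mask = (torch.rand(4, 4) < 0.2).float()   # ~20% of entries trainable
grad = torch.randn(4, 4)                  # placeholder task-loss gradient
w = sparse_update_step(w, w_pre, mask, grad)
print((w != w_pre).float().mean())        # fraction of entries that changed
```

Because the update touches only the masked entries, the downstream verifier needs to re-certify only the sparse difference from the already-verified pre-trained model, which is the source of the reported speedup.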

Cite

Text

Li et al. "Sparse Transfer Learning Accelerates and Enhances Certified Robustness: A Comprehensive Study." NeurIPS 2024 Workshops: AdvML-Frontiers, 2024.

Markdown

[Li et al. "Sparse Transfer Learning Accelerates and Enhances Certified Robustness: A Comprehensive Study." NeurIPS 2024 Workshops: AdvML-Frontiers, 2024.](https://mlanthology.org/neuripsw/2024/li2024neuripsw-sparse/)

BibTeX

@inproceedings{li2024neuripsw-sparse,
  title     = {{Sparse Transfer Learning Accelerates and Enhances Certified Robustness: A Comprehensive Study}},
  author    = {Li, Zhangheng and Chen, Tianlong and Li, Linyi and Li, Bo and Wang, Zhangyang},
  booktitle = {NeurIPS 2024 Workshops: AdvML-Frontiers},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/li2024neuripsw-sparse/}
}