CertViT: Certified Robustness of Pre-Trained Vision Transformers
Abstract
Lipschitz-bounded neural networks are certifiably robust and exhibit a good trade-off between clean and certified accuracy. Existing Lipschitz-bounding methods train from scratch and are limited to moderately sized networks (fewer than 6M parameters). They require substantial hyper-parameter tuning and are computationally prohibitive for large networks like Vision Transformers (5M to 660M parameters). Obtaining certified robustness for transformers is therefore not feasible with current methods, which are neither scalable nor flexible. This work presents CertViT, a two-step proximal-projection method that achieves certified robustness starting from pre-trained weights. The proximal step aims to lower the Lipschitz bound, while the projection step aims to preserve the clean accuracy of the pre-trained weights. We show that CertViT networks achieve better certified accuracy than state-of-the-art Lipschitz-trained networks. We apply CertViT to several variants of pre-trained vision transformers and demonstrate adversarial robustness under standard attacks. Code: https://github.com/sagarverma/transformer-lipschitz
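The abstract describes the two-step update only at a high level, so the following is a minimal, hypothetical sketch of one proximal-projection iteration on a single weight matrix. The concrete operators chosen here (clipping singular values to lower the spectral-norm Lipschitz bound, and projecting into a Frobenius ball around the pre-trained weights to keep clean accuracy roughly intact) and all names (`proximal_projection_step`, `target_lip`, `eps`) are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def proximal_projection_step(W_pre, W, target_lip=1.0, eps=5.0):
    """One illustrative proximal-projection update on a single weight matrix.

    W_pre      : pre-trained weights (fixed; anchors the projection step)
    W          : current weights being updated
    target_lip : desired per-layer spectral-norm (Lipschitz) bound (assumption)
    eps        : Frobenius-ball radius around W_pre (assumption)
    """
    # "Proximal" step (assumed form): clip the singular values at target_lip,
    # i.e. project onto the spectral-norm ball, which lowers the layer's
    # Lipschitz constant.
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    W_prox = U @ torch.diag(torch.clamp(S, max=target_lip)) @ Vh

    # Projection step (assumed form): pull the result back into an eps-ball
    # around the pre-trained weights so clean accuracy is roughly preserved.
    delta = W_prox - W_pre
    norm = delta.norm()
    if norm > eps:
        delta = delta * (eps / norm)
    return W_pre + delta

# Toy usage: alternate the two steps and inspect the resulting bound, which
# reflects the trade-off between staying near W_pre and lowering the bound.
W_pre = torch.randn(64, 64)
W = W_pre.clone()
for _ in range(20):
    W = proximal_projection_step(W_pre, W)
print(torch.linalg.matrix_norm(W, ord=2).item())  # per-layer spectral norm
```

In the full method this kind of update would run per layer, with the network-wide Lipschitz bound obtained by composing the per-layer bounds; the toy loop above only shows the alternation on one matrix.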
Cite
Text
Gupta and Verma. "CertViT: Certified Robustness of Pre-Trained Vision Transformers." ICML 2023 Workshops: AdvML-Frontiers, 2023.
Markdown
[Gupta and Verma. "CertViT: Certified Robustness of Pre-Trained Vision Transformers." ICML 2023 Workshops: AdvML-Frontiers, 2023.](https://mlanthology.org/icmlw/2023/gupta2023icmlw-certvit/)
BibTeX
@inproceedings{gupta2023icmlw-certvit,
title = {{CertViT: Certified Robustness of Pre-Trained Vision Transformers}},
author = {Gupta, Kavya and Verma, Sagar},
booktitle = {ICML 2023 Workshops: AdvML-Frontiers},
year = {2023},
url = {https://mlanthology.org/icmlw/2023/gupta2023icmlw-certvit/}
}