Robustness Against Gradient Based Attacks Through Cost Effective Network Fine-Tuning

Abstract

Adversarial perturbations aim to modify image pixels in an imperceptible manner so that a CNN classifier misclassifies an image while humans can still predict the original class. Several defense algorithms against adversarial attacks have been proposed in the literature, such as binary classification, which aims to detect adversarial examples, and network retraining using perturbed images. The challenge with the adversarial detection approach is that once the perturbed samples are detected, they are discarded and the system requires fresh input. On the other hand, adversarial training requires the generation of adversarial images for data augmentation and is hence computationally demanding. It is well known that training a deep CNN architecture is resource-intensive, and therefore retraining from scratch is not feasible in resource-constrained scenarios. We propose computationally efficient fine-tuning of pre-trained networks to increase their robustness against prevalent gradient-based attacks. The proposed fine-tuning is performed in a complete black-box fashion, where we do not know the training settings, such as the optimizer, batch size, and learning rate, used to train the network. Extensive experiments using multiple CNN architectures, such as VGG and ResNet, show that the proposed fine-tuning provides significant robustness against various widespread gradient attacks.
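
The abstract contrasts costly adversarial training with lightweight fine-tuning of an already trained network and evaluates against gradient-based attacks. The sketch below is not the authors' method; it is a minimal illustration, assuming PyTorch/torchvision, of the two ingredients involved: a standard FGSM gradient attack and a short fine-tuning pass over a pre-trained ResNet using only clean data (the `train_loader`, `x_batch`, and `y_batch` names are hypothetical placeholders).

```python
# Minimal sketch (not the paper's exact recipe): fine-tune a pre-trained
# torchvision ResNet briefly, then probe it with an FGSM gradient attack.
import torch
import torch.nn.functional as F
from torchvision import models


def fgsm_attack(model, images, labels, epsilon=8 / 255):
    """FGSM: x_adv = clip(x + epsilon * sign(grad_x loss), 0, 1)."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    x_adv = images + epsilon * images.grad.sign()
    return x_adv.clamp(0, 1).detach()


def fine_tune(model, loader, epochs=2, lr=1e-4, device="cpu"):
    """Short fine-tuning pass on clean data only; no adversarial examples
    are generated, which keeps the cost far below adversarial training."""
    model.train().to(device)
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()
    return model.eval()


# Usage sketch (placeholder data loaders/batches):
# model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
# model = fine_tune(model, train_loader)
# x_adv = fgsm_attack(model, x_batch, y_batch)
# robust_acc = (model(x_adv).argmax(1) == y_batch).float().mean()
```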

Cite

Text

Agarwal et al. "Robustness Against Gradient Based Attacks Through Cost Effective Network Fine-Tuning." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2023. doi:10.1109/CVPRW59228.2023.00008

Markdown

[Agarwal et al. "Robustness Against Gradient Based Attacks Through Cost Effective Network Fine-Tuning." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2023.](https://mlanthology.org/cvprw/2023/agarwal2023cvprw-robustness/) doi:10.1109/CVPRW59228.2023.00008

BibTeX

@inproceedings{agarwal2023cvprw-robustness,
  title     = {{Robustness Against Gradient Based Attacks Through Cost Effective Network Fine-Tuning}},
  author    = {Agarwal, Akshay and Ratha, Nalini K. and Singh, Richa and Vatsa, Mayank},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2023},
  pages     = {28--37},
  doi       = {10.1109/CVPRW59228.2023.00008},
  url       = {https://mlanthology.org/cvprw/2023/agarwal2023cvprw-robustness/}
}