Confidence-Aware Denoised Fine-Tuning of Off-the-Shelf Models for Certified Robustness
Abstract
The remarkable advances in deep learning have led to the emergence of many off-the-shelf classifiers, e.g., large pre-trained models. However, since they are typically trained on clean data, they remain vulnerable to adversarial attacks. Despite this vulnerability, their superior performance and transferability make off-the-shelf classifiers still valuable in practice, demanding further work to provide adversarial robustness for them in a post-hoc manner. A recently proposed method, denoised smoothing, places a denoiser model in front of the classifier to obtain provable robustness without additional training. However, the denoiser often produces hallucinations, i.e., images that have lost the semantics of their originally assigned class, leading to a drop in robustness. Furthermore, its noise-and-denoise procedure introduces a significant shift from the original data distribution, causing the denoised smoothing framework to achieve sub-optimal robustness. In this paper, we introduce Fine-Tuning with Confidence-Aware Denoised Image Selection (FT-CADIS), a novel fine-tuning scheme to enhance the certified robustness of off-the-shelf classifiers. FT-CADIS is inspired by the observation that the confidence of an off-the-shelf classifier can effectively identify hallucinated images during denoised smoothing. Based on this, we develop a confidence-aware training objective that handles such hallucinated images and improves the stability of fine-tuning on denoised images. In this way, the classifier can be fine-tuned using only images that are beneficial for adversarial robustness. We also find that such fine-tuning can be done by updating only a small fraction (i.e., 1%) of the classifier's parameters. Extensive experiments demonstrate that FT-CADIS establishes state-of-the-art certified robustness among denoised smoothing methods across all $l_2$-adversary radii on a variety of benchmarks, such as CIFAR-10 and ImageNet.
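The selection step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the function names, the confidence threshold `tau`, and the noise scale `sigma` are all assumptions for the example. Each image is perturbed with Gaussian noise, passed through a denoiser, and kept only if the classifier still assigns its original label with sufficient confidence (the hallucination filter).

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax with the usual max-subtraction for stability."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def select_denoised_images(classify, denoise, x, y,
                           sigma=0.25, n_samples=4, tau=0.5, rng=None):
    """Confidence-aware selection sketch (illustrative, not FT-CADIS itself):
    for each of `n_samples` noise draws, perturb the flattened images `x`,
    denoise them, and keep only the denoised copies whose classifier
    confidence on the original label `y` exceeds `tau`."""
    if rng is None:
        rng = np.random.default_rng(0)
    kept = []
    for _ in range(n_samples):
        noisy = x + sigma * rng.standard_normal(x.shape)  # Gaussian perturbation
        denoised = denoise(noisy)                         # off-the-shelf denoiser
        probs = softmax(classify(denoised))
        conf = probs[np.arange(len(y)), y]                # confidence on true label
        kept.append(denoised[conf > tau])                 # drop hallucinated images
    return np.concatenate(kept, axis=0)
```

Only the surviving denoised images would then contribute to the fine-tuning objective; in the paper this selection is folded into a confidence-aware training loss rather than a hard filter.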
Cite
Text
Jang et al. "Confidence-Aware Denoised Fine-Tuning of Off-the-Shelf Models for Certified Robustness." Transactions on Machine Learning Research, 2024.
Markdown
[Jang et al. "Confidence-Aware Denoised Fine-Tuning of Off-the-Shelf Models for Certified Robustness." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/jang2024tmlr-confidenceaware/)
BibTeX
@article{jang2024tmlr-confidenceaware,
title = {{Confidence-Aware Denoised Fine-Tuning of Off-the-Shelf Models for Certified Robustness}},
author = {Jang, Suhyeok and Kim, Seojin and Shin, Jinwoo and Jeong, Jongheon},
journal = {Transactions on Machine Learning Research},
year = {2024},
url = {https://mlanthology.org/tmlr/2024/jang2024tmlr-confidenceaware/}
}