Robust Fine-Tuning of Zero-Shot Models

Abstract

Large pre-trained models such as CLIP or ALIGN offer consistent accuracy across a range of data distributions when performing zero-shot inference (i.e., without fine-tuning on a specific dataset). Although existing fine-tuning methods substantially improve accuracy on a given target distribution, they often reduce robustness to distribution shifts. We address this tension by introducing a simple and effective method for improving robustness while fine-tuning: ensembling the weights of the zero-shot and fine-tuned models (WiSE-FT). Compared to standard fine-tuning, WiSE-FT provides large accuracy improvements under distribution shift, while preserving high accuracy on the target distribution. On ImageNet and five derived distribution shifts, WiSE-FT improves accuracy under distribution shift by 4 to 6 percentage points (pp) over prior work while increasing ImageNet accuracy by 1.6 pp. WiSE-FT achieves similarly large robustness gains (2 to 23 pp) on a diverse set of six further distribution shifts, and accuracy gains of 0.8 to 3.3 pp compared to standard fine-tuning on commonly used transfer learning datasets. These improvements come at no additional computational cost during fine-tuning or inference.
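The core operation behind WiSE-FT is a linear interpolation in weight space between the zero-shot and fine-tuned checkpoints, controlled by a mixing coefficient α. Below is a minimal sketch in PyTorch, assuming both checkpoints share the same architecture; the function and checkpoint names are illustrative, not the authors' reference implementation.

```python
# Minimal sketch of weight-space ensembling (WiSE-FT):
# theta = (1 - alpha) * theta_zero_shot + alpha * theta_fine_tuned
# Names and paths are hypothetical, for illustration only.
import torch


def wise_ft(zero_shot_state: dict, fine_tuned_state: dict, alpha: float = 0.5) -> dict:
    """Interpolate two state dicts with mixing coefficient alpha in [0, 1]."""
    assert zero_shot_state.keys() == fine_tuned_state.keys(), "architectures must match"
    return {
        key: (1 - alpha) * zero_shot_state[key] + alpha * fine_tuned_state[key]
        for key in zero_shot_state
    }


# Hypothetical usage: load both checkpoints, interpolate, and load the result.
# zero_shot = torch.load("clip_zero_shot.pt")
# fine_tuned = torch.load("clip_fine_tuned.pt")
# model.load_state_dict(wise_ft(zero_shot, fine_tuned, alpha=0.5))
```

Because the ensemble is formed once in weight space rather than by averaging predictions from two networks, inference runs a single model, which is why the abstract can claim no additional computational cost at fine-tuning or inference time. Sweeping α trades off target-distribution accuracy (α → 1) against robustness under distribution shift (α → 0).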

Cite

Text

Wortsman et al. "Robust Fine-Tuning of Zero-Shot Models." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.00780

Markdown

[Wortsman et al. "Robust Fine-Tuning of Zero-Shot Models." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/wortsman2022cvpr-robust/) doi:10.1109/CVPR52688.2022.00780

BibTeX

@inproceedings{wortsman2022cvpr-robust,
  title     = {{Robust Fine-Tuning of Zero-Shot Models}},
  author    = {Wortsman, Mitchell and Ilharco, Gabriel and Kim, Jong Wook and Li, Mike and Kornblith, Simon and Roelofs, Rebecca and Lopes, Raphael Gontijo and Hajishirzi, Hannaneh and Farhadi, Ali and Namkoong, Hongseok and Schmidt, Ludwig},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2022},
  pages     = {7959--7971},
  doi       = {10.1109/CVPR52688.2022.00780},
  url       = {https://mlanthology.org/cvpr/2022/wortsman2022cvpr-robust/}
}