LiT: Zero-Shot Transfer with Locked-Image Text Tuning

Abstract

This paper presents contrastive-tuning, a simple method employing contrastive training to align image and text models while still taking advantage of their pre-training. In our empirical study we find that locked pre-trained image models with unlocked text models work best. We call this instance of contrastive-tuning "Locked-image Tuning" (LiT), which just teaches a text model to read out good representations from a pre-trained image model for new tasks. A LiT model gains the capability of zero-shot transfer to new vision tasks, such as image classification or retrieval. The proposed LiT is widely applicable; it works reliably with multiple pre-training methods (supervised and unsupervised) and across diverse architectures (ResNet, Vision Transformers and MLP-Mixer) using three different image-text datasets. With the transformer-based pre-trained ViT-g/14 model, the LiT model achieves 84.5% zero-shot transfer accuracy on the ImageNet test set, and 81.1% on the challenging out-of-distribution ObjectNet test set.
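To make the method concrete, below is a minimal sketch of the LiT training objective in JAX: a symmetric CLIP-style contrastive (InfoNCE) loss in which the image tower is locked (its outputs are wrapped in stop_gradient and it carries no trainable parameters here) while only the text tower is tuned. The encoders are toy linear stand-ins of my own invention; the actual paper uses a pre-trained image model such as ViT-g/14 and a transformer text tower, and learns the temperature rather than fixing it.

import jax
import jax.numpy as jnp

def image_encoder(images):
    # Stand-in for the locked, pre-trained image tower (e.g. ViT-g/14).
    # In LiT its weights are never updated; stop_gradient emulates this.
    return jax.lax.stop_gradient(images @ jnp.ones((64, 128)) / 64.0)

def text_encoder(params, texts):
    # Trainable text tower: a single linear layer as a placeholder.
    return texts @ params["w"] + params["b"]

def lit_loss(params, images, texts, temperature=0.07):
    # L2-normalise both embeddings, as in CLIP-style contrastive training.
    zi = image_encoder(images)
    zt = text_encoder(params, texts)
    zi = zi / jnp.linalg.norm(zi, axis=-1, keepdims=True)
    zt = zt / jnp.linalg.norm(zt, axis=-1, keepdims=True)
    logits = zi @ zt.T / temperature          # (batch, batch) similarities
    labels = jnp.arange(images.shape[0])      # matching pairs on the diagonal
    # Symmetric InfoNCE: image-to-text plus text-to-image cross-entropy.
    loss_i2t = -jnp.mean(jax.nn.log_softmax(logits, axis=1)[labels, labels])
    loss_t2i = -jnp.mean(jax.nn.log_softmax(logits, axis=0)[labels, labels])
    return 0.5 * (loss_i2t + loss_t2i)

key = jax.random.PRNGKey(0)
params = {"w": jax.random.normal(key, (32, 128)) * 0.02,
          "b": jnp.zeros(128)}
images = jax.random.normal(key, (8, 64))   # toy "image" features
texts = jax.random.normal(key, (8, 32))    # toy "text" features
# Gradients flow only into the text tower's parameters.
grads = jax.grad(lit_loss)(params, images, texts)
print(lit_loss(params, images, texts))

Note the design choice this illustrates: because the image tower is frozen, its embeddings can in principle be precomputed once per image, and the strong pre-trained visual representation is preserved while the text model learns to align with it.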

Cite

Text

Zhai et al. "LiT: Zero-Shot Transfer with Locked-Image Text Tuning." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.01759

Markdown

[Zhai et al. "LiT: Zero-Shot Transfer with Locked-Image Text Tuning." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/zhai2022cvpr-lit/) doi:10.1109/CVPR52688.2022.01759

BibTeX

@inproceedings{zhai2022cvpr-lit,
  title     = {{LiT: Zero-Shot Transfer with Locked-Image Text Tuning}},
  author    = {Zhai, Xiaohua and Wang, Xiao and Mustafa, Basil and Steiner, Andreas and Keysers, Daniel and Kolesnikov, Alexander and Beyer, Lucas},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2022},
  pages     = {18123--18133},
  doi       = {10.1109/CVPR52688.2022.01759},
  url       = {https://mlanthology.org/cvpr/2022/zhai2022cvpr-lit/}
}