Leveraging Vision-Language Models for Improving Domain Generalization in Image Classification

Abstract

Vision-Language Models (VLMs) such as CLIP are trained on large amounts of image-text pairs, resulting in remarkable generalization across several data distributions. However, in several cases, the end application does not justify their expensive training and data collection/curation costs. This motivates a vendor-client paradigm, where a vendor trains a large-scale VLM and grants only input-output access to clients on a pay-per-query basis in a black-box setting. The client aims to minimize inference cost by distilling the VLM to a student model using the limited available task-specific data, and then deploying this student model in the downstream application. While naive distillation largely improves the In-Domain (ID) accuracy of the student, it fails to transfer the superior out-of-distribution (OOD) generalization of the VLM teacher using the limited available labeled images. To mitigate this, we propose Vision-Language to Vision - Align, Distill, Predict (VL2V-ADiP), which first aligns the vision and language modalities of the teacher model with the vision modality of a pre-trained student model, and further distills the aligned VLM representations to the student. This maximally retains the pre-trained features of the student while also incorporating the rich representations of the VLM image encoder and the superior generalization of the text embeddings. The proposed approach achieves state-of-the-art results on the standard Domain Generalization benchmarks in a black-box teacher setting, as well as in a white-box setting where the weights of the VLM are accessible.
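
The abstract describes the method only at a high level, so the following is a minimal, hypothetical PyTorch-style sketch of the align-and-distill idea it outlines: project the pre-trained student's vision features into the VLM embedding space, pull them toward both the VLM image embeddings and the class text embeddings, and train a classifier on the limited labeled data. All module, function, and loss names here are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class VL2VSketch(nn.Module):
    """Illustrative student wrapper: pre-trained vision backbone + projection
    into the VLM embedding space + a linear classifier (the "Predict" stage)."""

    def __init__(self, student_backbone, student_dim, vlm_dim, num_classes):
        super().__init__()
        self.student = student_backbone          # pre-trained student image encoder
        self.proj = nn.Linear(student_dim, vlm_dim)   # align student features to VLM space
        self.classifier = nn.Linear(vlm_dim, num_classes)

    def forward(self, images):
        feats = self.student(images)             # student vision features
        return self.proj(feats)                  # projected into the VLM embedding space


def distill_step(model, images, labels, teacher_img_emb, class_text_emb):
    """One illustrative training step: distill the (aligned) VLM image and text
    embeddings into the student, with supervision from the limited labels."""
    student_emb = F.normalize(model(images), dim=-1)
    img_target = F.normalize(teacher_img_emb, dim=-1)         # VLM image embeddings (black-box outputs)
    txt_target = F.normalize(class_text_emb[labels], dim=-1)  # VLM text embeddings of class prompts

    # Pull student embeddings toward both teacher modalities (cosine distance).
    loss_img = (1.0 - (student_emb * img_target).sum(dim=-1)).mean()
    loss_txt = (1.0 - (student_emb * txt_target).sum(dim=-1)).mean()

    # Supervised cross-entropy on the task-specific labeled images.
    logits = model.classifier(student_emb)
    loss_ce = F.cross_entropy(logits, labels)

    return loss_img + loss_txt + loss_ce
```

In this reading, the "Align" stage corresponds to learning the projection that brings the student's pre-trained features into the VLM's joint image-text space, "Distill" to the embedding-matching losses, and "Predict" to fitting the classifier; the actual staging and loss weighting in the paper may differ.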

Cite

Text

Addepalli et al. "Leveraging Vision-Language Models for Improving Domain Generalization in Image Classification." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.02258

Markdown

[Addepalli et al. "Leveraging Vision-Language Models for Improving Domain Generalization in Image Classification." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/addepalli2024cvpr-leveraging/) doi:10.1109/CVPR52733.2024.02258

BibTeX

@inproceedings{addepalli2024cvpr-leveraging,
  title     = {{Leveraging Vision-Language Models for Improving Domain Generalization in Image Classification}},
  author    = {Addepalli, Sravanti and Asokan, Ashish Ramayee and Sharma, Lakshay and Babu, R. Venkatesh},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2024},
  pages     = {23922--23932},
  doi       = {10.1109/CVPR52733.2024.02258},
  url       = {https://mlanthology.org/cvpr/2024/addepalli2024cvpr-leveraging/}
}