Localized Latent Updates for Fine-Tuning Vision-Language Models

Abstract

Although massive pre-trained vision-language models like CLIP show impressive generalization capabilities for many tasks, it often remains necessary to fine-tune them for improved performance on specific datasets. When doing so, it is desirable that updating the model is fast and that the model does not lose its capabilities on data outside of the dataset, as is often the case with classical fine-tuning approaches. In this work we suggest a lightweight adapter that only updates the model's predictions close to seen datapoints. We demonstrate the effectiveness and speed of this relatively simple approach in the context of few-shot learning, where our results both on classes seen and unseen during training are comparable with or improve on the state of the art.
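
The abstract describes the adapter only at a high level. As a rough mental model, a localized latent update can be pictured as a kernel-weighted correction in CLIP's embedding space that decays to zero far from the seen datapoints, so zero-shot behavior on unrelated data is preserved. The sketch below illustrates that idea under this assumption; the function, the Gaussian-style kernel, and all tensor names are hypothetical and not taken from the paper.

import torch

# Hypothetical sketch (not the paper's actual formulation): add a
# correction to CLIP's zero-shot logits that decays with latent-space
# distance to cached few-shot examples, leaving predictions far from
# seen data essentially unchanged.
def localized_logits(query_feats, zero_shot_logits, train_feats,
                     train_corrections, sigma=0.1):
    # All features are assumed L2-normalized, as is standard for CLIP,
    # so the dot product below is cosine similarity in [-1, 1].
    sims = query_feats @ train_feats.T               # (B, N)
    # Kernel weight is ~1 near a seen datapoint and ~0 far away.
    weights = torch.exp((sims - 1.0) / sigma)        # (B, N)
    correction = weights @ train_corrections         # (B, C)
    return zero_shot_logits + correction             # (B, C)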

Cite

Text

Ibing et al. "Localized Latent Updates for Fine-Tuning Vision-Language Models." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2023. doi:10.1109/CVPRW59228.2023.00474

Markdown

[Ibing et al. "Localized Latent Updates for Fine-Tuning Vision-Language Models." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2023.](https://mlanthology.org/cvprw/2023/ibing2023cvprw-localized/) doi:10.1109/CVPRW59228.2023.00474

BibTeX

@inproceedings{ibing2023cvprw-localized,
  title     = {{Localized Latent Updates for Fine-Tuning Vision-Language Models}},
  author    = {Ibing, Moritz and Lim, Isaak and Kobbelt, Leif},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2023},
  pages     = {4509--4518},
  doi       = {10.1109/CVPRW59228.2023.00474},
  url       = {https://mlanthology.org/cvprw/2023/ibing2023cvprw-localized/}
}