VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks

Abstract

Recently, fine-tuning language models pre-trained on large text corpora has provided huge improvements on vision-and-language (V&L) tasks as well as on pure language tasks. However, fine-tuning the entire parameter set of pre-trained models becomes impractical as model sizes grow rapidly. Hence, in this paper, we introduce adapter-based parameter-efficient transfer learning techniques to V&L models such as VL-BART and VL-T5. We evaluate our methods in a unified multi-task setup on both image-text and video-text benchmarks. For the image-text tasks, we use four diverse V&L datasets: VQAv2, GQA, NLVR2, and MSCOCO image captioning. For the video-text tasks, we use TVQA, How2QA, TVC, and YC2C. With careful training and thorough experiments, we benchmark three popular adapter-based methods (Adapter, Hyperformer, Compacter) against standard full fine-tuning and the recently proposed prompt-tuning approach. We also enhance the efficiency and performance of adapters by sharing their weights to acquire knowledge across tasks. Our results demonstrate that training the adapter with the weight-sharing technique (4.18% of total parameters for image-text tasks and 3.39% for video-text tasks) can match the performance of fine-tuning the entire model. Lastly, we present a comprehensive analysis, including the combination of adapters with task-specific prompts and the impact of V&L pre-training on adapters.
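As a rough illustration of the core idea (not the paper's actual code), a bottleneck adapter inserts a small down-projection/up-projection module with a residual connection into each frozen transformer layer; sharing one adapter's weights across tasks keeps the trainable-parameter fraction small. The dimensions and the parameter-count formula below are illustrative assumptions, not figures from the paper.

```python
import numpy as np

def make_adapter(d_model, d_bottleneck, rng):
    # Bottleneck adapter: down-projection then up-projection.
    return {
        "down": rng.standard_normal((d_model, d_bottleneck)) * 0.02,
        "up": rng.standard_normal((d_bottleneck, d_model)) * 0.02,
    }

def adapter_forward(x, adapter):
    # Residual connection: the frozen backbone's output passes through unchanged,
    # and only the small adapter contributes a learned correction.
    h = np.maximum(x @ adapter["down"], 0.0)  # ReLU nonlinearity
    return x + h @ adapter["up"]

rng = np.random.default_rng(0)
d_model, d_bottleneck, n_layers = 768, 48, 12  # assumed T5/BART-base-like sizes

# Weight sharing: one adapter reused across all layers and tasks,
# instead of a separate adapter per task.
shared = make_adapter(d_model, d_bottleneck, rng)

x = rng.standard_normal((4, d_model))
y = adapter_forward(x, shared)

# Rough per-layer backbone count: attention (4 * d^2) + FFN (2 * d * 4d).
backbone_params = n_layers * (4 * d_model**2 + 2 * d_model * 4 * d_model)
adapter_params = 2 * d_model * d_bottleneck
print(y.shape, adapter_params / backbone_params)
```

Running this shows the adapter's output keeps the backbone's hidden size, while the shared adapter's trainable parameters are a small fraction of the (frozen) backbone's.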

Cite

Text

Sung et al. "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.00516

Markdown

[Sung et al. "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/sung2022cvpr-vladapter/) doi:10.1109/CVPR52688.2022.00516

BibTeX

@inproceedings{sung2022cvpr-vladapter,
  title     = {{VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks}},
  author    = {Sung, Yi-Lin and Cho, Jaemin and Bansal, Mohit},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2022},
  pages     = {5227--5237},
  doi       = {10.1109/CVPR52688.2022.00516},
  url       = {https://mlanthology.org/cvpr/2022/sung2022cvpr-vladapter/}
}