VLMo: Unified Vision-Language Pre-Training with Mixture-of-Modality-Experts

Abstract

We present a unified Vision-Language pretrained Model (VLMo) that jointly learns a dual encoder and a fusion encoder with a modular Transformer network. Specifically, we introduce the Multiway Transformer, where each block contains a pool of modality-specific experts and a shared self-attention layer. Because of the modeling flexibility of the Multiway Transformer, the pre-trained VLMo can be fine-tuned as a fusion encoder for vision-language classification tasks, or used as a dual encoder for efficient image-text retrieval. Moreover, we propose a stagewise pre-training strategy, which effectively leverages large-scale image-only and text-only data in addition to image-text pairs. Experimental results show that VLMo achieves state-of-the-art results on various vision-language tasks, including VQA, NLVR2, and image-text retrieval.
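To make the architecture concrete, below is a minimal PyTorch-style sketch of one Multiway (mixture-of-modality-experts) block: a self-attention layer shared across modalities followed by a feed-forward expert selected by the input type. The class name, expert names, dimensions, and routing rule are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class MultiwayBlock(nn.Module):
    """Sketch of a Transformer block with shared attention and modality experts."""

    def __init__(self, dim=768, num_heads=12, mlp_ratio=4):
        super().__init__()
        # Self-attention is shared across modalities, encouraging image and
        # text tokens to be aligned in a common representation space.
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # One feed-forward "expert" per input type; a single expert is used
        # per forward pass depending on the modality of the tokens.
        self.norm2 = nn.LayerNorm(dim)
        self.experts = nn.ModuleDict({
            name: nn.Sequential(
                nn.Linear(dim, dim * mlp_ratio),
                nn.GELU(),
                nn.Linear(dim * mlp_ratio, dim),
            )
            for name in ("vision", "language", "vision_language")
        })

    def forward(self, x, modality):
        # Shared multi-head self-attention with a residual connection.
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        x = x + attn_out
        # Route all tokens to the modality-specific feed-forward expert.
        x = x + self.experts[modality](self.norm2(x))
        return x


# Usage: image-only tokens use the vision expert, text-only tokens use the
# language expert, and concatenated image-text tokens (fusion encoding) use
# the vision-language expert.
block = MultiwayBlock()
image_tokens = torch.randn(2, 197, 768)  # e.g. ViT patch tokens + [CLS]
text_tokens = torch.randn(2, 40, 768)    # e.g. subword token embeddings
fused = block(torch.cat([image_tokens, text_tokens], dim=1), "vision_language")
```

This routing is what lets the same pre-trained weights serve either as a dual encoder (encoding images and text separately with the vision and language experts) or as a fusion encoder (processing the concatenated sequence with the vision-language expert), as described in the abstract.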

Cite

Text

Bao et al. "VLMo: Unified Vision-Language Pre-Training with Mixture-of-Modality-Experts." Neural Information Processing Systems, 2022.

Markdown

[Bao et al. "VLMo: Unified Vision-Language Pre-Training with Mixture-of-Modality-Experts." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/bao2022neurips-vlmo/)

BibTeX

@inproceedings{bao2022neurips-vlmo,
  title     = {{VLMo: Unified Vision-Language Pre-Training with Mixture-of-Modality-Experts}},
  author    = {Bao, Hangbo and Wang, Wenhui and Dong, Li and Liu, Qiang and Mohammed, Owais Khan and Aggarwal, Kriti and Som, Subhojit and Piao, Songhao and Wei, Furu},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/bao2022neurips-vlmo/}
}