Unicoder-VL: A Universal Encoder for Vision and Language by Cross-Modal Pre-Training

Abstract

We propose Unicoder-VL, a universal encoder that aims to learn joint representations of vision and language in a pre-training manner. Borrowing ideas from cross-lingual pre-trained models such as XLM (Lample and Conneau 2019) and Unicoder (Huang et al. 2019), both visual and linguistic contents are fed into a multi-layer Transformer (Vaswani et al. 2017) for cross-modal pre-training, where three pre-training tasks are employed: Masked Language Modeling (MLM), Masked Object Classification (MOC), and Visual-linguistic Matching (VLM). The first two tasks learn context-aware representations for input tokens based on linguistic and visual contents jointly. The last task tries to predict whether an image and a text describe each other. After pre-training on large-scale image-caption pairs, we transfer Unicoder-VL to caption-based image-text retrieval and visual commonsense reasoning, with just one additional output layer. We achieve state-of-the-art or comparable results on both tasks and show the powerful ability of cross-modal pre-training.
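The MLM and MOC objectives both corrupt a fraction of the input (word tokens for MLM, region features for MOC) and train the encoder to recover the originals. As a rough illustration of that masking scheme, here is a minimal, hedged sketch in plain Python; the mask token id, masking probability, and ignore-label convention are assumptions for illustration, not details taken from the paper.

```python
import random

MASK_ID = 103  # hypothetical [MASK] token id (assumed, BERT-style)


def mask_tokens(token_ids, mask_prob=0.15, seed=0):
    """Randomly replace a fraction of tokens with [MASK], as in an
    MLM/MOC-style objective. Returns the corrupted sequence plus labels:
    the original id at masked positions, -100 (ignored) elsewhere."""
    rng = random.Random(seed)
    corrupted, labels = [], []
    for tid in token_ids:
        if rng.random() < mask_prob:
            corrupted.append(MASK_ID)  # model must predict the original id
            labels.append(tid)
        else:
            corrupted.append(tid)      # position contributes no loss
            labels.append(-100)
    return corrupted, labels
```

In the actual model, the analogous corruption is applied jointly to word tokens and image-region features inside one Transformer input sequence, so each masked element is predicted from both modalities' context.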

Cite

Text

Li et al. "Unicoder-VL: A Universal Encoder for Vision and Language by Cross-Modal Pre-Training." AAAI Conference on Artificial Intelligence, 2020. doi:10.1609/AAAI.V34I07.6795

Markdown

[Li et al. "Unicoder-VL: A Universal Encoder for Vision and Language by Cross-Modal Pre-Training." AAAI Conference on Artificial Intelligence, 2020.](https://mlanthology.org/aaai/2020/li2020aaai-unicoder/) doi:10.1609/AAAI.V34I07.6795

BibTeX

@inproceedings{li2020aaai-unicoder,
  title     = {{Unicoder-VL: A Universal Encoder for Vision and Language by Cross-Modal Pre-Training}},
  author    = {Li, Gen and Duan, Nan and Fang, Yuejian and Gong, Ming and Jiang, Daxin},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2020},
  pages     = {11336--11344},
  doi       = {10.1609/AAAI.V34I07.6795},
  url       = {https://mlanthology.org/aaai/2020/li2020aaai-unicoder/}
}