BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning
Abstract
Vision-Language (VL) models with the Two-Tower architecture have dominated vision-language representation learning in recent years. Current VL models either use lightweight uni-modal encoders and learn to extract, align, and fuse both modalities simultaneously in a deep cross-modal encoder, or feed the last-layer uni-modal representations from deep pre-trained uni-modal encoders into the top cross-modal encoder. Both approaches potentially restrict vision-language representation learning and limit model performance. In this paper, we propose BridgeTower, which introduces multiple bridge layers that build a connection between the top layers of uni-modal encoders and each layer of the cross-modal encoder. This enables effective bottom-up cross-modal alignment and fusion between visual and textual representations of different semantic levels from pre-trained uni-modal encoders in the cross-modal encoder. Pre-trained with only 4M images, BridgeTower achieves state-of-the-art performance on various downstream vision-language tasks. In particular, on the VQAv2 test-std set, BridgeTower achieves an accuracy of 78.73%, outperforming the previous state-of-the-art model METER by 1.09% with the same pre-training data and almost negligible additional parameters and computational costs. Notably, when further scaling the model, BridgeTower achieves an accuracy of 81.15%, surpassing models that are pre-trained on orders-of-magnitude larger datasets. Code and checkpoints are available at https://github.com/microsoft/BridgeTower.
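The core idea in the abstract can be sketched in code: instead of feeding only the last uni-modal layer into the cross-modal encoder, each of the top-N uni-modal layer outputs is "bridged" into the corresponding cross-modal layer. The following is a minimal NumPy sketch of that data flow; the additive fusion followed by layer normalization is a simplification (the paper explores several bridge-layer designs), and all names and dimensions here are illustrative, not the authors' implementation.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize over the feature dimension (a stand-in for nn.LayerNorm).
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def bridge(cross_modal_state, uni_modal_state):
    # Simplified bridge layer: fuse the k-th top uni-modal representation
    # into the input of the k-th cross-modal layer (add + LayerNorm).
    return layer_norm(cross_modal_state + uni_modal_state)

# Toy forward pass: the top-N uni-modal layer outputs feed N cross-modal layers.
rng = np.random.default_rng(0)
seq_len, dim, num_cross_layers = 4, 8, 3
uni_modal_tops = [rng.normal(size=(seq_len, dim)) for _ in range(num_cross_layers)]

hidden = np.zeros((seq_len, dim))  # initial cross-modal state
for k in range(num_cross_layers):
    hidden = bridge(hidden, uni_modal_tops[k])
    # ... a real cross-modal transformer layer would run here ...

print(hidden.shape)
```

This contrasts with the "last-layer only" design the abstract criticizes, where the loop above would see a single uni-modal representation rather than one per semantic level.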
Cite
Text
Xu et al. "BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I9.26263
Markdown
[Xu et al. "BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/xu2023aaai-bridgetower/) doi:10.1609/AAAI.V37I9.26263
BibTeX
@inproceedings{xu2023aaai-bridgetower,
title = {{BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning}},
author = {Xu, Xiao and Wu, Chenfei and Rosenman, Shachar and Lal, Vasudev and Che, Wanxiang and Duan, Nan},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2023},
pages = {10637-10647},
doi = {10.1609/AAAI.V37I9.26263},
url = {https://mlanthology.org/aaai/2023/xu2023aaai-bridgetower/}
}