GIT: A Generative Image-to-Text Transformer for Vision and Language
Abstract
In this paper, we design and train a Generative Image-to-text Transformer, GIT, to unify vision-language tasks such as image/video captioning and question answering. While generative models provide a consistent network architecture between pre-training and fine-tuning, existing work typically contains complex structures (uni/multi-modal encoder/decoder) and depends on external modules such as object detectors/taggers and optical character recognition (OCR). In GIT, we simplify the architecture to one image encoder and one text decoder under a single language modeling task. We also scale up the pre-training data and the model size to boost performance. Without bells and whistles, our GIT establishes new state-of-the-art results on numerous challenging benchmarks by a large margin. For instance, our model surpasses human performance for the first time on TextCaps (138.2 vs. 125.5 in CIDEr). Furthermore, we present a new scheme of generation-based image classification and scene text recognition, achieving decent performance on standard benchmarks.
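To make the described architecture concrete, below is a minimal sketch of a GIT-style model: one image encoder feeding visual tokens into a single transformer decoder trained with a plain language-modeling loss on the caption. This is not the authors' released code; the class name `TinyGITSketch`, the layer sizes, and the use of a simple linear projection in place of GIT's contrastively pre-trained image encoder are illustrative assumptions.

```python
# Minimal sketch of a GIT-style generative image-to-text model (illustrative,
# not the authors' implementation). Sizes and the stand-in image encoder are
# assumptions chosen to keep the example small and runnable.
import torch
import torch.nn as nn

class TinyGITSketch(nn.Module):
    def __init__(self, vocab_size=30522, d_model=256, n_heads=4, n_layers=2,
                 num_patches=196, max_text_len=40):
        super().__init__()
        # Stand-in image encoder: GIT uses a contrastively pre-trained vision
        # transformer; here a single linear projection of patch features.
        self.image_proj = nn.Linear(768, d_model)
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(num_patches + max_text_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, 4 * d_model,
                                           batch_first=True)
        # One stack of self-attention blocks over [image tokens; text tokens],
        # i.e. a single text decoder conditioned on the image.
        self.decoder = nn.TransformerEncoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)
        self.num_patches = num_patches

    def forward(self, patch_feats, text_ids):
        # patch_feats: (B, num_patches, 768); text_ids: (B, T)
        B, T = text_ids.shape
        img = self.image_proj(patch_feats)          # (B, P, d)
        txt = self.token_emb(text_ids)              # (B, T, d)
        x = torch.cat([img, txt], dim=1)
        pos = torch.arange(x.size(1), device=x.device)
        x = x + self.pos_emb(pos)
        # Seq2seq attention mask: image tokens attend only to image tokens;
        # text tokens attend to all image tokens and to earlier text tokens.
        P, L = self.num_patches, self.num_patches + T
        mask = torch.zeros(L, L, dtype=torch.bool, device=x.device)
        causal = torch.triu(torch.ones(T, T, dtype=torch.bool,
                                       device=x.device), diagonal=1)
        mask[P:, P:] = causal   # block attention to future text tokens
        mask[:P, P:] = True     # image tokens do not attend to text
        h = self.decoder(x, mask=mask)
        return self.lm_head(h[:, P:])  # logits over text positions only

# Training objective: language modeling of the caption conditioned on the
# image, i.e. cross-entropy against the next caption token.
model = TinyGITSketch()
patches = torch.randn(2, 196, 768)
caption = torch.randint(0, 30522, (2, 12))
logits = model(patches, caption[:, :-1])
loss = nn.functional.cross_entropy(logits.reshape(-1, logits.size(-1)),
                                   caption[:, 1:].reshape(-1))
```

At inference, the same decoder is used autoregressively, feeding back generated tokens until an end-of-sentence token; the paper frames captioning, VQA (question as a text prefix), and even classification and scene text recognition as this one generation task.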
Cite
Text
Wang et al. "GIT: A Generative Image-to-Text Transformer for Vision and Language." Transactions on Machine Learning Research, 2022.
Markdown
[Wang et al. "GIT: A Generative Image-to-Text Transformer for Vision and Language." Transactions on Machine Learning Research, 2022.](https://mlanthology.org/tmlr/2022/wang2022tmlr-git/)
BibTeX
@article{wang2022tmlr-git,
title = {{GIT: A Generative Image-to-Text Transformer for Vision and Language}},
author = {Wang, Jianfeng and Yang, Zhengyuan and Hu, Xiaowei and Li, Linjie and Lin, Kevin and Gan, Zhe and Liu, Zicheng and Liu, Ce and Wang, Lijuan},
journal = {Transactions on Machine Learning Research},
year = {2022},
url = {https://mlanthology.org/tmlr/2022/wang2022tmlr-git/}
}