PreSTU: Pre-Training for Scene-Text Understanding

Abstract

The ability to recognize and reason about text embedded in visual inputs is often lacking in vision-and-language (V&L) models, perhaps because V&L pre-training methods rarely include such an ability in their training objectives. In this paper, we propose PreSTU, a novel pre-training recipe dedicated to scene-text understanding (STU). PreSTU introduces OCR-aware pre-training objectives that encourage the model to recognize text from an image and connect it to the rest of the image content. We implement PreSTU using a simple transformer-based encoder-decoder architecture, trained on large-scale image-text datasets whose scene text is extracted by an off-the-shelf OCR system. We empirically demonstrate the effectiveness of this pre-training approach on eight visual question answering and four image captioning benchmarks.
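To make the idea of an OCR-aware objective concrete, here is a minimal illustrative sketch (not the authors' code) of how a generation target might be built so that the model must first emit the scene text found by an off-the-shelf OCR system before producing the rest of the target text. The separator token, the reading-order heuristic, and the function names are assumptions for illustration only.

```python
# Illustrative sketch of an OCR-aware pre-training target, in the spirit
# of PreSTU's recipe.  The target sequence begins with the OCR-detected
# scene text, so recognizing that text becomes part of the generation
# objective.  The "[SEP]" token and the top-to-bottom, left-to-right
# ordering heuristic are assumptions, not the paper's exact choices.

def sort_ocr_tokens(ocr_results):
    """Order OCR tokens top-to-bottom, left-to-right by box origin (x, y)."""
    return [tok for tok, _ in sorted(ocr_results, key=lambda r: (r[1][1], r[1][0]))]

def build_target(ocr_results, caption, sep="[SEP]"):
    """Concatenate ordered scene text and the caption into one target string."""
    return " ".join(sort_ocr_tokens(ocr_results) + [sep, caption])

# Hypothetical OCR output: (token, (x, y) top-left coordinate) pairs.
ocr = [("SALE", (120, 10)), ("OPEN", (5, 60)), ("50%", (10, 12))]
target = build_target(ocr, "a shop window with signs")
# target == "SALE 50% OPEN [SEP] a shop window with signs"
```

During pre-training, the encoder-decoder would then be asked to generate such targets from the image alone, coupling text recognition with image understanding.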

Cite

Text

Kil et al. "PreSTU: Pre-Training for Scene-Text Understanding." International Conference on Computer Vision, 2023. doi:10.1109/ICCV51070.2023.01401

Markdown

[Kil et al. "PreSTU: Pre-Training for Scene-Text Understanding." International Conference on Computer Vision, 2023.](https://mlanthology.org/iccv/2023/kil2023iccv-prestu/) doi:10.1109/ICCV51070.2023.01401

BibTeX

@inproceedings{kil2023iccv-prestu,
  title     = {{PreSTU: Pre-Training for Scene-Text Understanding}},
  author    = {Kil, Jihyung and Changpinyo, Soravit and Chen, Xi and Hu, Hexiang and Goodman, Sebastian and Chao, Wei-Lun and Soricut, Radu},
  booktitle = {International Conference on Computer Vision},
  year      = {2023},
  pages     = {15270--15280},
  doi       = {10.1109/ICCV51070.2023.01401},
  url       = {https://mlanthology.org/iccv/2023/kil2023iccv-prestu/}
}