Track the Answer: Extending TextVQA from Image to Video with Spatio-Temporal Clues

Abstract

Video text-based visual question answering (Video TextVQA) is a practical task that aims to answer questions by jointly reasoning over textual and visual information in a given video. Inspired by the development of TextVQA in the image domain, existing Video TextVQA approaches leverage a language model (e.g., T5) to process multiple text-rich frames and generate answers auto-regressively. Nevertheless, the spatio-temporal relationships among visual entities (including scene text and objects) are disrupted, and models are susceptible to interference from unrelated information, resulting in irrational reasoning and inaccurate answers. To tackle these challenges, we propose TEA (short for "Track the Answer"), a method that better extends the generative TextVQA framework from image to video. TEA recovers the spatio-temporal relationships in a complementary way and incorporates OCR-aware clues to improve the quality of question reasoning. Extensive experiments on several public Video TextVQA datasets validate the effectiveness and generalization of our framework. TEA outperforms existing TextVQA methods, video-language pretraining methods, and video large language models by large margins. The code will be publicly released.

Cite

Text

Zhang et al. "Track the Answer: Extending TextVQA from Image to Video with Spatio-Temporal Clues." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I10.33115

Markdown

[Zhang et al. "Track the Answer: Extending TextVQA from Image to Video with Spatio-Temporal Clues." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/zhang2025aaai-track/) doi:10.1609/AAAI.V39I10.33115

BibTeX

@inproceedings{zhang2025aaai-track,
  title     = {{Track the Answer: Extending TextVQA from Image to Video with Spatio-Temporal Clues}},
  author    = {Zhang, Yan and Zeng, Gangyan and Shen, Huawen and Wu, Daiqing and Zhou, Yu and Ma, Can},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {10275--10283},
  doi       = {10.1609/AAAI.V39I10.33115},
  url       = {https://mlanthology.org/aaai/2025/zhang2025aaai-track/}
}