GRAM: Global Reasoning for Multi-Page VQA
Abstract
The increasing use of transformer-based large language models brings forward the challenge of processing long sequences. In document visual question answering (DocVQA), leading methods focus on the single-page setting, while documents can span hundreds of pages. We present GRAM, a method that seamlessly extends pre-trained single-page models to the multi-page setting without requiring computationally heavy pretraining. To do so, we leverage a single-page encoder for local page-level understanding and enhance it with designated document-level layers and learnable tokens, facilitating the flow of information across pages for global reasoning. To encourage the model to utilize the newly introduced document tokens, we propose a tailored bias adaptation method. For additional computational savings during decoding, we introduce an optional compression stage using our compression transformer (CFormer), which reduces the encoded sequence length and thereby allows a trade-off between quality and latency. Extensive experiments showcase GRAM's state-of-the-art performance on the benchmarks for multi-page DocVQA, demonstrating the effectiveness of our approach.
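The alternation the abstract describes, in which local layers encode each page together with its own document tokens and global layers let the document tokens from all pages exchange information, can be sketched minimally as follows. This is an illustrative assumption, not the authors' implementation: the attention is a bare single-head placeholder without learned projections, and all names and sizes (`tokens_per_page`, `num_doc_tokens`, etc.) are made up for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # Single-head scaled dot-product self-attention with no learned
    # projections -- a stand-in for a real transformer encoder layer.
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    return softmax(scores) @ x

rng = np.random.default_rng(0)
num_pages, tokens_per_page, num_doc_tokens, dim = 3, 8, 2, 16

# Per-page token sequences and learnable document tokens (one copy per page).
pages = [rng.normal(size=(tokens_per_page, dim)) for _ in range(num_pages)]
doc_tokens = [rng.normal(size=(num_doc_tokens, dim)) for _ in range(num_pages)]

# Local stage: each page is encoded together with only its own doc tokens,
# so cost stays linear in the number of pages.
local = [self_attention(np.concatenate([d, p])) for d, p in zip(doc_tokens, pages)]
doc_tokens = [seq[:num_doc_tokens] for seq in local]
pages = [seq[num_doc_tokens:] for seq in local]

# Global stage: doc tokens from all pages attend to one another, carrying
# cross-page information into the next local stage.
all_doc = np.concatenate(doc_tokens)   # shape: (num_pages * num_doc_tokens, dim)
all_doc = self_attention(all_doc)
doc_tokens = list(np.split(all_doc, num_pages))

print(all_doc.shape)
```

In a full model these two stages would repeat, interleaved through the encoder stack; the point of the sketch is that only the small set of document tokens ever attends across pages, so the global stage stays cheap even for long documents.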
Cite

Text
Blau et al. "GRAM: Global Reasoning for Multi-Page VQA." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.01477

Markdown
[Blau et al. "GRAM: Global Reasoning for Multi-Page VQA." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/blau2024cvpr-gram/) doi:10.1109/CVPR52733.2024.01477

BibTeX
@inproceedings{blau2024cvpr-gram,
title = {{GRAM: Global Reasoning for Multi-Page VQA}},
author = {Blau, Tsachi and Fogel, Sharon and Ronen, Roi and Golts, Alona and Ganz, Roy and Avraham, Elad Ben and Aberdam, Aviad and Tsiper, Shahar and Litman, Ron},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2024},
pages = {15598--15607},
doi = {10.1109/CVPR52733.2024.01477},
url = {https://mlanthology.org/cvpr/2024/blau2024cvpr-gram/}
}