Hierarchical Visual Feature Aggregation for OCR-Free Document Understanding

Abstract

We present a novel OCR-free document understanding framework based on pretrained Multimodal Large Language Models (MLLMs). Our approach employs multi-scale visual features to effectively handle various font sizes within document images. To address the increasing costs of considering the multi-scale visual inputs for MLLMs, we propose the Hierarchical Visual Feature Aggregation (HVFA) module, designed to reduce the number of input tokens to LLMs. Leveraging a feature pyramid with cross-attentive pooling, our approach effectively manages the trade-off between information loss and efficiency without being affected by varying document image sizes. Furthermore, we introduce a novel instruction tuning task, which facilitates the model's text-reading capability by learning to predict the relative positions of input text, eventually minimizing the risk of truncated text caused by the limited capacity of LLMs. Comprehensive experiments validate the effectiveness of our approach, demonstrating superior performance in various document understanding tasks.
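The core idea of cross-attentive pooling, compressing a variable-length set of visual tokens into a fixed-size set before feeding the LLM, can be sketched as follows. This is a minimal illustration using learnable query tokens and standard multi-head cross-attention, not the paper's actual HVFA implementation; the module name, query count, and per-level pooling scheme here are assumptions for illustration.

```python
import torch
import torch.nn as nn

class CrossAttentivePool(nn.Module):
    """Hypothetical sketch: compress a variable number of visual tokens
    into a fixed number of outputs via cross-attention, so the LLM's
    input length stays constant regardless of document image size."""

    def __init__(self, dim: int, num_queries: int, num_heads: int = 8):
        super().__init__()
        # Learnable queries act as the fixed-size "summary" slots.
        self.queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, n_tokens, dim) — e.g. one level of a feature pyramid
        q = self.queries.unsqueeze(0).expand(feats.size(0), -1, -1)
        pooled, _ = self.attn(q, feats, feats)  # queries attend to visual tokens
        return pooled  # (batch, num_queries, dim), independent of n_tokens

# Multi-scale usage: pool each pyramid level, then concatenate the results.
pool = CrossAttentivePool(dim=256, num_queries=64)
levels = [torch.randn(2, n, 256) for n in (1024, 256, 64)]  # coarse-to-fine
tokens = torch.cat([pool(f) for f in levels], dim=1)
print(tokens.shape)  # fixed (2, 3 * 64, 256) regardless of input resolution
```

Note that each pyramid level, whatever its token count, is reduced to the same fixed number of outputs, which is what bounds the cost of multi-scale inputs to the LLM.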

Cite

Text

Park et al. "Hierarchical Visual Feature Aggregation for OCR-Free Document Understanding." Neural Information Processing Systems, 2024. doi:10.52202/079017-3362

Markdown

[Park et al. "Hierarchical Visual Feature Aggregation for OCR-Free Document Understanding." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/park2024neurips-hierarchical/) doi:10.52202/079017-3362

BibTeX

@inproceedings{park2024neurips-hierarchical,
  title     = {{Hierarchical Visual Feature Aggregation for OCR-Free Document Understanding}},
  author    = {Park, Jaeyoo and Choi, Jin Young and Park, Jeonghyung and Han, Bohyung},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-3362},
  url       = {https://mlanthology.org/neurips/2024/park2024neurips-hierarchical/}
}