Vision Grid Transformer for Document Layout Analysis
Abstract
Document pre-trained models and grid-based models have proven to be very effective on various tasks in Document AI. However, for the document layout analysis (DLA) task, existing document pre-trained models, even those pre-trained in a multi-modal fashion, usually rely on either textual features or visual features. Grid-based models for DLA are multi-modal but largely neglect the effect of pre-training. To fully leverage multi-modal information and exploit pre-training techniques to learn better representations for DLA, in this paper, we present VGT, a two-stream Vision Grid Transformer, in which a Grid Transformer (GiT) is proposed and pre-trained for 2D token-level and segment-level semantic understanding. Furthermore, a new dataset named D^4LA, which is to date the most diverse and detailed manually annotated benchmark for document layout analysis, is curated and released. Experimental results show that the proposed VGT model achieves new state-of-the-art results on DLA tasks, e.g. PubLayNet (95.7% to 96.2%), DocBank (79.6% to 84.1%), and D^4LA (67.7% to 68.8%). The code and models, as well as the D^4LA dataset, will be made publicly available.
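To make the two-stream idea concrete, the sketch below shows one way a vision stream over the page image and a grid stream over 2D-placed word tokens could be fused into a shared feature map. This is only a minimal PyTorch illustration of the general architecture described in the abstract; all module names, dimensions, the stand-in backbone, and the concatenation-based fusion are assumptions, not the authors' implementation or pre-training setup.

```python
# Minimal two-stream sketch in the spirit of VGT (illustrative assumptions only).
import torch
import torch.nn as nn


class GridTransformer(nn.Module):
    """Toy grid stream: embeds word tokens placed on a coarse 2D page grid
    and contextualizes them with a Transformer encoder."""

    def __init__(self, vocab_size=30522, dim=256, depth=4, heads=8):
        super().__init__()
        self.token_embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)

    def forward(self, grid_tokens):
        # grid_tokens: (B, H, W) token ids laid out on the page grid
        b, h, w = grid_tokens.shape
        x = self.token_embed(grid_tokens).flatten(1, 2)   # (B, H*W, dim)
        x = self.encoder(x)
        return x.view(b, h, w, -1).permute(0, 3, 1, 2)    # (B, dim, H, W)


class TwoStreamModel(nn.Module):
    """Toy two-stream model: a small CNN vision stream plus the grid stream,
    fused by concatenation and fed to a per-cell classification head."""

    def __init__(self, dim=256, num_classes=5):
        super().__init__()
        self.vision = nn.Sequential(                      # stand-in visual backbone
            nn.Conv2d(3, dim, 7, stride=4, padding=3), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.grid = GridTransformer(dim=dim)
        self.fuse = nn.Conv2d(2 * dim, dim, 1)
        self.head = nn.Conv2d(dim, num_classes, 1)        # per-cell layout logits

    def forward(self, image, grid_tokens):
        v = self.vision(image)                            # (B, dim, H', W')
        g = self.grid(grid_tokens)                        # (B, dim, H, W)
        g = nn.functional.interpolate(g, size=v.shape[-2:], mode="nearest")
        return self.head(self.fuse(torch.cat([v, g], dim=1)))


# Example: one 512x512 page image with a 32x32 token grid.
model = TwoStreamModel()
logits = model(torch.randn(1, 3, 512, 512), torch.randint(0, 30522, (1, 32, 32)))
print(logits.shape)  # torch.Size([1, 5, 64, 64])
```

In the actual paper, the fused features would feed a detection framework to localize layout regions, and GiT would be pre-trained before fine-tuning; this sketch only illustrates how the two modalities can share a spatial feature map.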
Cite
Text
Da et al. "Vision Grid Transformer for Document Layout Analysis." International Conference on Computer Vision, 2023. doi:10.1109/ICCV51070.2023.01783Markdown
[Da et al. "Vision Grid Transformer for Document Layout Analysis." International Conference on Computer Vision, 2023.](https://mlanthology.org/iccv/2023/da2023iccv-vision/) doi:10.1109/ICCV51070.2023.01783BibTeX
@inproceedings{da2023iccv-vision,
  title     = {{Vision Grid Transformer for Document Layout Analysis}},
  author    = {Da, Cheng and Luo, Chuwei and Zheng, Qi and Yao, Cong},
  booktitle = {International Conference on Computer Vision},
  year      = {2023},
  pages     = {19462-19472},
  doi       = {10.1109/ICCV51070.2023.01783},
  url       = {https://mlanthology.org/iccv/2023/da2023iccv-vision/}
}