Alignment-Enriched Tuning for Patch-Level Pre-Trained Document Image Models
Abstract
Alignment between image and text has shown promising improvements on patch-level pre-trained document image models. However, investigating more effective or finer-grained alignment techniques during pre-training requires substantial computation and time. Thus, a question naturally arises: Could we fine-tune pre-trained models on downstream tasks with alignment objectives and achieve comparable or better performance? In this paper, we propose a new model architecture with alignment-enriched tuning (dubbed AETNet) upon pre-trained document image models, to adapt to downstream tasks with a joint task-specific supervised and alignment-aware contrastive objective. Specifically, we introduce an extra visual transformer as the alignment-aware image encoder and an extra text transformer as the alignment-aware text encoder before multimodal fusion. We consider alignment in the following three aspects: 1) document-level alignment by leveraging the cross-modal and intra-modal contrastive loss; 2) global-local alignment for modeling localized and structural information in document images; and 3) local-level alignment for more accurate patch-level information. Experiments show that AETNet achieves state-of-the-art performance on various downstream tasks. Notably, AETNet consistently outperforms state-of-the-art pre-trained models, such as LayoutLMv3 with fine-tuning techniques, on three different downstream tasks. Code is available at https://github.com/MAEHCM/AET.
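The document-level cross-modal alignment mentioned above is typically realized with a symmetric InfoNCE-style contrastive loss between paired image and text embeddings. The sketch below illustrates that general idea in NumPy; it is a minimal illustration, not the authors' exact objective, and the function name, batch shapes, and temperature value are assumptions for the example.

```python
import numpy as np

def cross_modal_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Illustrative symmetric InfoNCE-style loss over a batch of
    paired image/text embeddings (not AETNet's exact formulation).

    img_emb, txt_emb: arrays of shape (B, D); row i of each is a pair.
    """
    # L2-normalize so the dot product is cosine similarity
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature       # (B, B) similarity matrix
    targets = np.arange(len(logits))         # matched pairs lie on the diagonal

    def ce(l):
        # cross-entropy against the diagonal targets, numerically stabilized
        l = l - l.max(axis=1, keepdims=True)
        log_p = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_p[targets, targets].mean()

    # average the image-to-text and text-to-image directions
    return 0.5 * (ce(logits) + ce(logits.T))
```

Minimizing this loss pulls each image embedding toward its paired text embedding while pushing it away from the other texts in the batch, which is the intuition behind the cross-modal term in the joint objective.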
Cite
Text
Wang et al. "Alignment-Enriched Tuning for Patch-Level Pre-Trained Document Image Models." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I2.25357
Markdown
[Wang et al. "Alignment-Enriched Tuning for Patch-Level Pre-Trained Document Image Models." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/wang2023aaai-alignment/) doi:10.1609/AAAI.V37I2.25357
BibTeX
@inproceedings{wang2023aaai-alignment,
title = {{Alignment-Enriched Tuning for Patch-Level Pre-Trained Document Image Models}},
author = {Wang, Lei and He, Jiabang and Xu, Xing and Liu, Ning and Liu, Hui},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2023},
pages = {2590--2598},
doi = {10.1609/AAAI.V37I2.25357},
url = {https://mlanthology.org/aaai/2023/wang2023aaai-alignment/}
}