BERTgrid: Contextualized Embedding for 2D Document Representation and Understanding

Abstract

In generic documents, cues such as font size, column layout, and, more generally, the positioning of words may carry semantic information that is crucial for solving a downstream document intelligence task. Our novel BERTgrid, which builds on Chargrid by Katti et al. (2018), represents a document as a grid of contextualized word-piece embedding vectors, thereby making both its spatial structure and its semantics accessible to the processing neural network. The contextualized embedding vectors are retrieved from a BERT language model. We use BERTgrid in combination with a fully convolutional network on a semantic instance segmentation task for extracting fields from invoices, and demonstrate its performance on tabulated line item and document header field extraction.
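The core data structure can be illustrated with a minimal sketch: each word piece's embedding vector is painted into every grid cell covered by that token's bounding box, so the resulting tensor preserves the document's 2D layout. This is an assumption-laden toy, not the authors' implementation; the placeholder vectors stand in for contextualized BERT embeddings, and the helper name `make_bertgrid` is invented for illustration.

```python
import numpy as np

def make_bertgrid(tokens, height, width, emb_dim):
    """Build a BERTgrid-style tensor of shape (height, width, emb_dim).

    Every pixel inside a token's bounding box is filled with that token's
    embedding vector; background pixels stay zero, so the document's
    spatial layout is preserved alongside the embeddings.

    tokens: list of (embedding, (x0, y0, x1, y1)) with pixel coordinates,
            where x1/y1 are exclusive.
    """
    grid = np.zeros((height, width, emb_dim), dtype=np.float32)
    for emb, (x0, y0, x1, y1) in tokens:
        grid[y0:y1, x0:x1, :] = emb
    return grid

# Toy example: two "word pieces" with made-up 4-d vectors standing in
# for contextualized embeddings (a real pipeline would obtain these by
# running BERT over the serialized document text and mapping each word
# piece back to its OCR bounding box).
tok_a = (np.array([1.0, 0.0, 0.0, 0.0]), (0, 0, 3, 2))
tok_b = (np.array([0.0, 1.0, 0.0, 0.0]), (4, 0, 8, 2))
g = make_bertgrid([tok_a, tok_b], height=4, width=8, emb_dim=4)
```

A tensor built this way can be fed directly to a fully convolutional network, which then predicts a segmentation mask per field class over the same spatial grid.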

Cite

Text

Denk and Reisswig. "BERTgrid: Contextualized Embedding for 2D Document Representation and Understanding." NeurIPS 2019 Workshops: Document Intelligence, 2019.

Markdown

[Denk and Reisswig. "BERTgrid: Contextualized Embedding for 2D Document Representation and Understanding." NeurIPS 2019 Workshops: Document Intelligence, 2019.](https://mlanthology.org/neuripsw/2019/denk2019neuripsw-bertgrid/)

BibTeX

@inproceedings{denk2019neuripsw-bertgrid,
  title     = {{BERTgrid: Contextualized Embedding for 2D Document Representation and Understanding}},
  author    = {Denk, Timo I. and Reisswig, Christian},
  booktitle = {NeurIPS 2019 Workshops: Document Intelligence},
  year      = {2019},
  url       = {https://mlanthology.org/neuripsw/2019/denk2019neuripsw-bertgrid/}
}