TableFormer: Table Structure Understanding with Transformers
Abstract
Tables organize valuable content in a concise and compact representation. This content is extremely valuable for systems such as search engines, Knowledge Graphs, etc., since it enhances their predictive capabilities. Unfortunately, tables come in a large variety of shapes and sizes. Furthermore, they can have complex column/row-header configurations, multi-line rows, different varieties of separation lines, missing entries, etc. As such, correctly identifying the table structure from an image is a non-trivial task. In this paper, we present a new table-structure identification model. It improves on the latest end-to-end deep learning model (i.e., the encoder-dual-decoder from PubTabNet) in two significant ways. First, we introduce a new object-detection decoder for table cells. In this way, we can obtain the content of the table cells of programmatic PDFs directly from the PDF source and avoid training custom OCR decoders. This architectural change leads to more accurate table-content extraction and allows us to tackle non-English tables. Second, we replace the LSTM decoders with transformer-based decoders. This upgrade significantly improves the previous state-of-the-art tree-editing-distance score (TEDS) from 91% to 98.5% on simple tables and from 88.7% to 95% on complex tables.
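For context, the TEDS metric quoted in the abstract (introduced with the PubTabNet benchmark) measures similarity between a predicted table tree $T_a$ and a ground-truth tree $T_b$ via tree edit distance, normalized by the larger tree size:

```latex
\mathrm{TEDS}(T_a, T_b) = 1 - \frac{\mathrm{EditDist}(T_a, T_b)}{\max\left(|T_a|, |T_b|\right)}
```

Here $\mathrm{EditDist}$ is the tree edit distance between the two HTML table trees and $|T|$ denotes the number of nodes in $T$; a score of 1 (100%) indicates a perfect structural and content match.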
Cite
Text
Nassar et al. "TableFormer: Table Structure Understanding with Transformers." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.00457
Markdown
[Nassar et al. "TableFormer: Table Structure Understanding with Transformers." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/nassar2022cvpr-tableformer/) doi:10.1109/CVPR52688.2022.00457
BibTeX
@inproceedings{nassar2022cvpr-tableformer,
title = {{TableFormer: Table Structure Understanding with Transformers}},
author = {Nassar, Ahmed and Livathinos, Nikolaos and Lysak, Maksym and Staar, Peter},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2022},
pages = {4614-4623},
doi = {10.1109/CVPR52688.2022.00457},
url = {https://mlanthology.org/cvpr/2022/nassar2022cvpr-tableformer/}
}