CARTE: Pretraining and Transfer for Tabular Learning
Abstract
Pretrained deep-learning models are the go-to solution for images or text. However, for tabular data the standard is still to train tree-based models. Indeed, transfer learning on tables hits the challenge of data integration: finding correspondences in the entries (entity matching), where different words may denote the same entity, and correspondences across columns (schema matching), which may come in different orders, under different names... We propose a neural architecture that does not need such correspondences. As a result, we can pretrain it on background data that has not been matched. The architecture, CARTE (Context Aware Representation of Table Entries), uses a graph representation of tabular (or relational) data to process tables with different columns, string embeddings of entries and column names to model an open vocabulary, and a graph-attentional network to contextualize entries with column names and neighboring entries. An extensive benchmark shows that CARTE facilitates learning, outperforming a solid set of baselines including the best tree-based models. CARTE also enables joint learning across tables with unmatched columns, enhancing a small table with bigger ones. CARTE opens the door to large pretrained models for tabular data.
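To make the architecture's first step concrete, below is a minimal sketch of the row-to-graph representation the abstract describes: each row becomes a small star graph whose leaf nodes carry string embeddings of the cell values and whose edges carry embeddings of the column names, so tables with different schemas map to the same kind of object. Everything in the sketch is an assumption for illustration: the function names, the hash-based stand-in embedder (the paper uses pretrained string embeddings to model an open vocabulary), and the treatment of numerical cells as value-scaled column embeddings. The resulting graphlets would then be fed to the graph-attentional network, which is outside this sketch.

```python
# Illustrative sketch only; names and the toy embedder are assumptions,
# not the paper's code.
import hashlib
import numpy as np

DIM = 8  # toy embedding size; the paper uses pretrained string embeddings


def embed(text: str) -> np.ndarray:
    """Stand-in string embedder: a deterministic pseudo-random vector per string."""
    seed = int.from_bytes(hashlib.sha1(text.encode()).digest()[:4], "big")
    return np.random.default_rng(seed).standard_normal(DIM)


def row_to_graphlet(row: dict) -> dict:
    """Turn one table row into a star graph: a center node linked to one node
    per cell, with each edge labeled by the column-name embedding.
    Numerical cells scale the column embedding (an assumption of this sketch)."""
    nodes, edges = [np.zeros(DIM)], []  # node 0 is the row center
    for col, val in row.items():
        e = embed(col)  # edge feature: the column name
        if isinstance(val, (int, float)) and not isinstance(val, bool):
            x = float(val) * e
        else:
            x = embed(str(val))  # node feature: the cell string
        nodes.append(x)
        edges.append((0, len(nodes) - 1, e))  # center -> cell, labeled edge
    return {"nodes": np.stack(nodes), "edges": edges}


g = row_to_graphlet({"title": "Chateau Margaux", "year": 2015, "region": "Bordeaux"})
print(g["nodes"].shape, len(g["edges"]))  # -> (4, 8) 3
```

Because nodes and edges are identified by embeddings rather than by column position or exact column names, no entity matching or schema matching across tables is needed before pretraining.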
Cite
Text
Kim et al. "CARTE: Pretraining and Transfer for Tabular Learning." International Conference on Machine Learning, 2024.
Markdown
[Kim et al. "CARTE: Pretraining and Transfer for Tabular Learning." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/kim2024icml-carte/)
BibTeX
@inproceedings{kim2024icml-carte,
title = {{CARTE: Pretraining and Transfer for Tabular Learning}},
author = {Kim, Myung Jun and Grinsztajn, L{\'e}o and Varoquaux, Ga{\"e}l},
booktitle = {International Conference on Machine Learning},
year = {2024},
pages = {23843--23866},
volume = {235},
url = {https://mlanthology.org/icml/2024/kim2024icml-carte/}
}