PORTAL: Scalable Tabular Foundation Models via Content-Specific Tokenization

Abstract

Self-supervised learning on tabular data seeks to carry over advances from the natural language and image domains to the heterogeneous world of tables. However, current techniques often struggle to integrate multi-domain data and either require data cleaning or impose specific structural constraints, limiting the scale of pre-training datasets. We introduce PORTAL (Pretraining One-Row-at-a-Time for All tabLes), a framework that handles various data modalities without any cleaning or preprocessing. This simple yet powerful approach can be effectively pre-trained on datasets collected from the web and fine-tuned to match state-of-the-art methods on complex classification and regression tasks. This work offers a practical advance in self-supervised learning for large-scale tabular data.
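
To make the idea of content-specific tokenization concrete, below is a minimal sketch of how raw cells might be routed by content type (number, date, text, missing) and assembled one row at a time. This is purely illustrative: the function names and the set of handled modalities are assumptions, not PORTAL's actual tokenizer, which is specified in the paper.

```python
from datetime import datetime

def tokenize_cell(value):
    """Route a raw cell to a modality-specific representation.

    Hypothetical illustration of content-specific tokenization:
    each cell is dispatched by its content type, so no prior data
    cleaning or schema normalization is needed.
    """
    # Missing values (None or NaN) get their own token rather than
    # requiring imputation upstream.
    if value is None or (isinstance(value, float) and value != value):
        return ("missing", None)
    if isinstance(value, (int, float)):
        return ("number", float(value))   # numeric path, e.g. a scaled embedding
    if isinstance(value, str):
        # Try a few common date formats before falling back to text.
        for fmt in ("%Y-%m-%d", "%d/%m/%Y"):
            try:
                return ("date", datetime.strptime(value, fmt))
            except ValueError:
                pass
        return ("text", value)            # free text, e.g. subword tokenization
    return ("text", str(value))

def tokenize_row(row):
    """One row at a time: each (column, cell) pair becomes one token."""
    return [(col, *tokenize_cell(val)) for col, val in row.items()]

# Mixed-modality row with a missing value, ingested as-is.
print(tokenize_row({"age": 42, "signup": "2023-05-01",
                    "city": "Paris", "income": None}))
```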

Cite

Text

Spinaci et al. "PORTAL: Scalable Tabular Foundation Models via Content-Specific Tokenization." NeurIPS 2024 Workshops: TRL, 2024.

Markdown

[Spinaci et al. "PORTAL: Scalable Tabular Foundation Models via Content-Specific Tokenization." NeurIPS 2024 Workshops: TRL, 2024.](https://mlanthology.org/neuripsw/2024/spinaci2024neuripsw-portal/)

BibTeX

@inproceedings{spinaci2024neuripsw-portal,
  title     = {{PORTAL: Scalable Tabular Foundation Models via Content-Specific Tokenization}},
  author    = {Spinaci, Marco and Polewczyk, Marek and Hoffart, Johannes and Kohler, Markus C. and Thelin, Sam and Klein, Tassilo},
  booktitle = {NeurIPS 2024 Workshops: TRL},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/spinaci2024neuripsw-portal/}
}