Scaling Experiments in Self-Supervised Cross-Table Representation Learning
Abstract
To analyze the scaling potential of deep tabular representation learning models, we introduce a novel Transformer-based architecture specifically tailored to tabular data and cross-table representation learning, utilizing table-specific tokenizers and a shared Transformer backbone. Our training approach covers both single-table and cross-table models, trained to impute missing values via a self-supervised masked cell recovery objective. To understand the scaling behavior of our method, we train models of varying sizes, ranging from approximately $10^4$ to $10^7$ parameters. These models are trained on a carefully curated pretraining dataset consisting of 135M training tokens sourced from 76 diverse datasets. We assess the scaling of our architecture in both single-table and cross-table pretraining setups by evaluating the pretrained models using linear probing on a curated set of benchmark datasets and comparing the results with conventional baselines.
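To make the setup concrete, below is a minimal sketch (assuming a PyTorch implementation) of the components the abstract describes: a table-specific tokenizer that turns each cell into a token, a Transformer backbone that can be shared across tables, and masked cell recovery as the pretraining objective. All class names, dimensions, the masking ratio, and the single numeric reconstruction head are illustrative assumptions, not the authors' code.

```python
# Minimal sketch (not the authors' implementation): table-specific tokenizer,
# shared Transformer backbone, and a masked-cell-recovery pretraining step.
# Names, sizes, and the masking scheme are illustrative assumptions.
import torch
import torch.nn as nn


class TableTokenizer(nn.Module):
    """Embeds one table's cells into d_model-dimensional tokens (one token per column)."""

    def __init__(self, num_numeric: int, cat_cardinalities: list[int], d_model: int):
        super().__init__()
        # One learned weight/bias pair per numeric column: token = value * w + b.
        self.num_weight = nn.Parameter(torch.randn(num_numeric, d_model))
        self.num_bias = nn.Parameter(torch.zeros(num_numeric, d_model))
        # One embedding table per categorical column.
        self.cat_embeds = nn.ModuleList(
            [nn.Embedding(card, d_model) for card in cat_cardinalities]
        )
        self.mask_token = nn.Parameter(torch.zeros(d_model))

    def forward(self, x_num, x_cat, mask):
        # x_num: (B, num_numeric) floats, x_cat: (B, num_categorical) integer codes,
        # mask: (B, num_columns) boolean, True where the cell is hidden.
        num_tokens = x_num.unsqueeze(-1) * self.num_weight + self.num_bias
        cat_tokens = torch.stack(
            [emb(x_cat[:, i]) for i, emb in enumerate(self.cat_embeds)], dim=1
        )
        tokens = torch.cat([num_tokens, cat_tokens], dim=1)  # (B, num_columns, d_model)
        return torch.where(mask.unsqueeze(-1), self.mask_token, tokens)


class SharedBackbone(nn.Module):
    """Transformer encoder shared across tables in the cross-table setup."""

    def __init__(self, d_model: int = 64, n_heads: int = 4, n_layers: int = 4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, tokens):
        return self.encoder(tokens)  # (B, num_columns, d_model)


if __name__ == "__main__":
    # Masked cell recovery: hide a random subset of cells and reconstruct them
    # from the backbone's contextual representations (numeric part shown only).
    B, d_model = 8, 64
    tokenizer = TableTokenizer(num_numeric=3, cat_cardinalities=[5, 7], d_model=d_model)
    backbone = SharedBackbone(d_model=d_model)
    numeric_head = nn.Linear(d_model, 1)

    x_num = torch.randn(B, 3)
    x_cat = torch.randint(0, 5, (B, 2))
    mask = torch.rand(B, 5) < 0.3  # masking ratio is an assumption

    hidden = backbone(tokenizer(x_num, x_cat, mask))
    num_mask = mask[:, :3]
    pred = numeric_head(hidden[:, :3]).squeeze(-1)
    loss = ((pred - x_num) ** 2)[num_mask].mean()
    print(loss.item())
```

In a cross-table setting, each table would get its own `TableTokenizer` while `SharedBackbone` parameters are shared; linear probing would then fit a linear classifier or regressor on the frozen backbone's representations for each downstream benchmark dataset.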
Cite
Text
Schambach et al. "Scaling Experiments in Self-Supervised Cross-Table Representation Learning." NeurIPS 2023 Workshops: TRL, 2023.
Markdown
[Schambach et al. "Scaling Experiments in Self-Supervised Cross-Table Representation Learning." NeurIPS 2023 Workshops: TRL, 2023.](https://mlanthology.org/neuripsw/2023/schambach2023neuripsw-scaling/)
BibTeX
@inproceedings{schambach2023neuripsw-scaling,
  title     = {{Scaling Experiments in Self-Supervised Cross-Table Representation Learning}},
  author    = {Schambach, Maximilian and Paul, Dominique and Otterbach, Johannes},
  booktitle = {NeurIPS 2023 Workshops: TRL},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/schambach2023neuripsw-scaling/}
}