TabRet: Pre-Training Transformer-Based Tabular Models for Unseen Columns

Abstract

We present TabRet, a pre-trainable Transformer-based model for tabular data. TabRet is designed to work on downstream tasks that contain columns not seen during pre-training. Unlike other methods, TabRet has an extra learning step before fine-tuning, called retokenizing, which calibrates feature embeddings based on a masked autoencoding loss. In experiments, we pre-trained TabRet on a large collection of public health surveys and fine-tuned it on classification tasks in healthcare, where TabRet achieved the best AUC performance on four datasets. In addition, an ablation study shows that retokenizing and random shuffle augmentation of columns during pre-training contributed to performance gains. The code is available at https://github.com/pfnet-research/tabret.
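
The abstract describes retokenizing only at a high level: the embeddings (tokenizers) of the unseen downstream columns are calibrated with a masked-autoencoding objective while the pre-trained model is kept fixed. The following is a minimal PyTorch sketch of that idea under stated assumptions, not the official implementation in the linked repository; the names ColumnTokenizer, retokenize, backbone, and decoder, the zero-token masking, the 0.5 mask ratio, and the mean-squared reconstruction loss are all illustrative choices.

import torch
import torch.nn as nn


class ColumnTokenizer(nn.Module):
    """Embeds one numerical column as a d_model-dimensional token (illustrative)."""

    def __init__(self, d_model: int) -> None:
        super().__init__()
        self.weight = nn.Parameter(torch.randn(d_model))
        self.bias = nn.Parameter(torch.zeros(d_model))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch,) -> (batch, d_model)
        return x.unsqueeze(-1) * self.weight + self.bias


def retokenize(backbone: nn.Module,
               decoder: nn.Module,
               new_tokenizers: nn.ModuleList,
               x_new: torch.Tensor,
               mask_ratio: float = 0.5,
               steps: int = 100) -> None:
    """Calibrate embeddings of unseen columns with a masked-autoencoding loss
    while the pre-trained backbone and decoder stay frozen (a simplification;
    the paper's exact training recipe may differ)."""
    for module in (backbone, decoder):
        for p in module.parameters():
            p.requires_grad_(False)
    opt = torch.optim.Adam(new_tokenizers.parameters(), lr=1e-3)

    for _ in range(steps):
        # Tokenize each unseen column independently: (batch, n_cols, d_model).
        tokens = torch.stack(
            [tok(x_new[:, j]) for j, tok in enumerate(new_tokenizers)], dim=1
        )
        # Randomly mask a subset of column tokens; zeroing stands in for the
        # learnable mask token typically used in masked autoencoders.
        mask = torch.rand(tokens.shape[:2]) < mask_ratio
        corrupted = tokens.masked_fill(mask.unsqueeze(-1), 0.0)
        # Reconstruct the original cell values and penalize masked positions.
        recon = decoder(backbone(corrupted)).squeeze(-1)  # (batch, n_cols)
        loss = ((recon - x_new) ** 2)[mask].mean()
        opt.zero_grad()
        loss.backward()
        opt.step()


# Example usage with a toy frozen backbone and 3 unseen numerical columns.
if __name__ == "__main__":
    d_model = 16
    backbone = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
        num_layers=2,
    )
    decoder = nn.Linear(d_model, 1)
    new_tokenizers = nn.ModuleList(ColumnTokenizer(d_model) for _ in range(3))
    retokenize(backbone, decoder, new_tokenizers, torch.randn(32, 3))

In this sketch only the new columns' tokenizer parameters receive gradients, which mirrors the stated purpose of retokenizing: adapting the input embeddings for unseen columns before fine-tuning, rather than retraining the whole model.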

Cite

Text

Onishi et al. "TabRet: Pre-Training Transformer-Based Tabular Models for Unseen Columns." ICLR 2023 Workshops: ME-FoMo, 2023.

Markdown

[Onishi et al. "TabRet: Pre-Training Transformer-Based Tabular Models for Unseen Columns." ICLR 2023 Workshops: ME-FoMo, 2023.](https://mlanthology.org/iclrw/2023/onishi2023iclrw-tabret/)

BibTeX

@inproceedings{onishi2023iclrw-tabret,
  title     = {{TabRet: Pre-Training Transformer-Based Tabular Models for Unseen Columns}},
  author    = {Onishi, Soma and Oono, Kenta and Hayashi, Kohei},
  booktitle = {ICLR 2023 Workshops: ME-FoMo},
  year      = {2023},
  url       = {https://mlanthology.org/iclrw/2023/onishi2023iclrw-tabret/}
}