Making Pre-Trained Language Models Great on Tabular Prediction
Abstract
The transferability of deep neural networks (DNNs) has made significant progress in image and language processing. However, due to the heterogeneity among tables, these transfer benefits are still far from being well exploited in tabular data prediction (e.g., regression or classification tasks). Having condensed knowledge from diverse domains, language models (LMs) can comprehend feature names from various tables, potentially serving as versatile learners that transfer knowledge across distinct tables and diverse prediction tasks, but their discrete text representation space is inherently incompatible with numerical feature values in tables. In this paper, we present TP-BERTa, a specifically pre-trained LM for tabular data prediction. Concretely, a novel relative magnitude tokenization converts scalar numerical feature values to finely discrete, high-dimensional tokens, and an intra-feature attention approach integrates feature values with the corresponding feature names. Comprehensive experiments demonstrate that our pre-trained TP-BERTa achieves leading performance among tabular DNNs and is competitive with Gradient Boosted Decision Tree models in the typical tabular data regime.
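To make the relative magnitude tokenization idea concrete, here is a minimal, hypothetical sketch of how a scalar feature value could be mapped to a discrete, high-dimensional token: the value is bucketed into one of several magnitude bins, and each bin owns a learned embedding. The class name, quantile-based bin edges, and embedding size below are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class MagnitudeTokenizer(nn.Module):
    """Illustrative sketch (not TP-BERTa's exact RMT): discretize a scalar
    feature value into one of several magnitude bins, each associated with
    its own high-dimensional token embedding."""

    def __init__(self, bin_edges: torch.Tensor, emb_dim: int = 768):
        super().__init__()
        # Bin boundaries fitted on training data (assumed quantile-based here).
        self.register_buffer("bin_edges", bin_edges)            # (num_bins - 1,)
        self.embedding = nn.Embedding(len(bin_edges) + 1, emb_dim)

    def forward(self, values: torch.Tensor) -> torch.Tensor:
        # Map each scalar to a bin index, then look up the bin's embedding.
        bin_ids = torch.bucketize(values, self.bin_edges)       # (batch,)
        return self.embedding(bin_ids)                          # (batch, emb_dim)

# Hypothetical usage: bin edges from quantiles of a training column.
train_col = torch.randn(1000)
edges = torch.quantile(train_col, torch.linspace(0.1, 0.9, 9))
tokenizer = MagnitudeTokenizer(edges, emb_dim=768)
tokens = tokenizer(torch.tensor([0.3, -1.2, 2.5]))              # shape (3, 768)
```

In the paper's framing, such value tokens would then be fused with the embeddings of their feature names via an intra-feature attention mechanism, so that the LM sees each feature as a name-value pair rather than raw text.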
Cite
Text
Yan et al. "Making Pre-Trained Language Models Great on Tabular Prediction." International Conference on Learning Representations, 2024.
Markdown
[Yan et al. "Making Pre-Trained Language Models Great on Tabular Prediction." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/yan2024iclr-making/)
BibTeX
@inproceedings{yan2024iclr-making,
  title     = {{Making Pre-Trained Language Models Great on Tabular Prediction}},
  author    = {Yan, Jiahuan and Zheng, Bo and Xu, Hongxia and Zhu, Yiheng and Chen, Danny and Sun, Jimeng and Wu, Jian and Chen, Jintai},
  booktitle = {International Conference on Learning Representations},
  year      = {2024},
  url       = {https://mlanthology.org/iclr/2024/yan2024iclr-making/}
}