TabNet: Attentive Interpretable Tabular Learning
Abstract
We propose a novel high-performance and interpretable canonical deep tabular data learning architecture, TabNet. TabNet uses sequential attention to choose which features to reason from at each decision step, enabling interpretability and more efficient learning as the learning capacity is used for the most salient features. We demonstrate that TabNet outperforms other variants on a wide range of non-performance-saturated tabular datasets and yields interpretable feature attributions plus insights into its global behavior. Finally, we demonstrate self-supervised learning for tabular data, significantly improving performance when unlabeled data is abundant.
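The abstract's core idea, selecting a sparse subset of features at each sequential decision step while discouraging re-use of already-selected features, can be sketched in a toy form. The snippet below is a minimal illustration under simplifying assumptions: the mask logits are random stand-ins for the learned attentive transformer, and a softmax replaces the sparsemax used in the paper; only the prior-relaxation update with factor gamma follows the paper's formulation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def sequential_feature_masks(n_features, n_steps=3, gamma=1.3, seed=0):
    """Toy sketch of TabNet-style sequential attentive feature selection.

    At each decision step, a mask over the input features is produced and a
    "prior" term discourages re-using features selected at earlier steps
    (relaxation factor gamma, as in the paper). The random logit matrix W and
    the softmax are simplifying assumptions: the real model learns the logits
    with an attentive transformer and applies sparsemax for hard sparsity.
    """
    rng = np.random.default_rng(seed)
    prior = np.ones(n_features)                 # all features fully available
    W = rng.normal(size=(n_steps, n_features))  # stand-in for learned logits
    masks = []
    for step in range(n_steps):
        logits = W[step] * prior        # scale logits by remaining prior
        mask = softmax(logits)          # paper uses sparsemax here
        prior = prior * (gamma - mask)  # shrink prior for features just used
        masks.append(mask)
    return np.stack(masks)

masks = sequential_feature_masks(n_features=8, n_steps=3)
print(masks.shape)  # (3, 8): one mask per decision step, each summing to 1
```

With gamma = 1.3, a feature that receives high mask weight at one step has its prior multiplied by roughly 0.3, so later steps are steered toward features not yet attended to; the per-step masks are also what the paper aggregates to produce its feature attributions.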
Cite
Text
Arik and Pfister. "TabNet: Attentive Interpretable Tabular Learning." AAAI Conference on Artificial Intelligence, 2021. doi:10.1609/AAAI.V35I8.16826
Markdown
[Arik and Pfister. "TabNet: Attentive Interpretable Tabular Learning." AAAI Conference on Artificial Intelligence, 2021.](https://mlanthology.org/aaai/2021/arik2021aaai-tabnet/) doi:10.1609/AAAI.V35I8.16826
BibTeX
@inproceedings{arik2021aaai-tabnet,
title = {{TabNet: Attentive Interpretable Tabular Learning}},
author = {Arik, Sercan Ö. and Pfister, Tomas},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2021},
  pages = {6679--6687},
doi = {10.1609/AAAI.V35I8.16826},
url = {https://mlanthology.org/aaai/2021/arik2021aaai-tabnet/}
}