EquiTabPFN: A Target-Permutation Equivariant Prior Fitted Network
Abstract
Recent foundational models for tabular data, such as TabPFN, excel at adapting to new tasks via in-context learning but remain constrained to a fixed, pre-defined number of target dimensions—often necessitating costly ensembling strategies. We trace this constraint to a deeper architectural shortcoming: these models lack target-equivariance, so that permuting target-dimension orderings alters their predictions. This deficiency gives rise to an irreducible “equivariance gap,” an error term that introduces instability in predictions. We eliminate this gap by designing a fully target-equivariant architecture—ensuring permutation invariance via equivariant encoders, decoders, and a bi-attention mechanism. Empirical evaluation on standard classification benchmarks shows that, on datasets with more classes than those seen during pre-training, our model matches or surpasses existing methods while incurring lower computational overhead.
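The target-equivariance the abstract appeals to can be stated as a concrete, testable property: permuting the class (target-dimension) order of the in-context labels should permute the model's output columns identically, i.e. f(X, yP) = f(X, y)P for any permutation matrix P. The Python sketch below checks this property empirically on a toy predictor; the predict(X_train, y_onehot, X_test) interface, the nearest-centroid classifier, and the equivariance_gap helper are all illustrative assumptions rather than the paper's actual API, and the measured quantity is only an empirical proxy for the paper's formal equivariance gap.

import numpy as np

def prototype_predict(X_train, y_onehot, X_test):
    # Toy in-context classifier: soft nearest-centroid. It is target-equivariant
    # by construction, since reordering the one-hot label columns reorders the
    # centroids, and hence the output columns, in exactly the same way.
    counts = y_onehot.sum(axis=0, keepdims=True)                   # (1, n_classes)
    centroids = (y_onehot.T @ X_train) / np.maximum(counts.T, 1)   # (n_classes, d)
    logits = -((X_test[:, None, :] - centroids[None]) ** 2).sum(-1)
    e = np.exp(logits - logits.max(axis=1, keepdims=True))         # stable softmax
    return e / e.sum(axis=1, keepdims=True)                        # (n_test, n_classes)

def equivariance_gap(predict, X_train, y_onehot, X_test, n_trials=5, seed=0):
    # Empirical proxy for the equivariance gap: the largest deviation between
    # f(X, yP) and f(X, y)P over random class permutations P. It is zero for a
    # target-equivariant model and nonzero when class order changes predictions.
    rng = np.random.default_rng(seed)
    base = predict(X_train, y_onehot, X_test)
    gap = 0.0
    for _ in range(n_trials):
        perm = rng.permutation(y_onehot.shape[1])
        permuted = predict(X_train, y_onehot[:, perm], X_test)
        gap = max(gap, np.abs(permuted - base[:, perm]).max())
    return gap

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X_train = rng.normal(size=(100, 8))
    y_onehot = np.eye(5)[rng.integers(0, 5, size=100)]
    X_test = rng.normal(size=(20, 8))
    # Prints 0.0: the toy predictor is equivariant by construction. A model that
    # depends on class ordering would report a strictly positive gap here.
    print(equivariance_gap(prototype_predict, X_train, y_onehot, X_test))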
Cite
Text
Arbel et al. "EquiTabPFN: A Target-Permutation Equivariant Prior Fitted Network." Advances in Neural Information Processing Systems, 2025.
Markdown
[Arbel et al. "EquiTabPFN: A Target-Permutation Equivariant Prior Fitted Network." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/arbel2025neurips-equitabpfn/)
BibTeX
@inproceedings{arbel2025neurips-equitabpfn,
title = {{EquiTabPFN: A Target-Permutation Equivariant Prior Fitted Network}},
author = {Arbel, Michael and Salinas, David and Hutter, Frank},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/arbel2025neurips-equitabpfn/}
}