Prior-Fitted Networks Scale to Larger Datasets When Treated as Weak Learners

Abstract

Prior-Fitted Networks (PFNs) have recently been proposed as an efficient approach to tabular classification. Although they achieve good performance on small datasets, they encounter limitations on larger ones, including significant memory consumption and increased computational complexity, primarily because it is impractical to feed all training samples into the network as input. To address these challenges, we investigate the fitting assumption of PFNs with respect to their input samples. Building on this understanding, we propose \emph{BoostPFN}, designed to enhance the performance of these networks, especially on large-scale datasets. We also theoretically validate the convergence of BoostPFN, and our empirical results demonstrate that BoostPFN can outperform standard PFNs given the same number of training samples on large datasets, while achieving a significant acceleration in training time over established baselines, including widely used Gradient Boosting Decision Trees (GBDTs), deep learning methods, and AutoML systems. High performance is maintained for up to 50x the pre-training size of PFNs, substantially extending the limit on training samples. Through this work, we address the challenge of efficiently handling large datasets with PFN-based models, paving the way for faster and more effective training and prediction in tabular data classification.
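The core idea in the abstract is to sidestep a PFN's context-size limit by treating a PFN conditioned on a small, weighted subsample as a weak learner inside a boosting loop. The sketch below illustrates this with a hypothetical stand-in: a nearest-centroid classifier plays the role of the PFN (the actual BoostPFN weak learner and its weighting scheme are defined in the paper, not here), and the outer loop is standard AdaBoost-style reweighting.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_context_learner(X, y):
    # Hypothetical stand-in for a PFN conditioned on a small context set:
    # a nearest-class-centroid classifier fit on the subsample it is given.
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
    def predict(Xq):
        d = ((Xq[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        return classes[d.argmin(1)]
    return predict

def boost(X, y, rounds=10, context=32):
    # Boost weak learners that each see only a small weighted subsample,
    # mimicking a PFN's bounded input context. Labels assumed in {0, 1}.
    n = len(X)
    w = np.full(n, 1.0 / n)                      # sample weights
    learners, alphas = [], []
    for _ in range(rounds):
        idx = rng.choice(n, size=min(context, n), replace=True, p=w)
        h = fit_context_learner(X[idx], y[idx])  # weak learner on the subsample
        miss = h(X) != y
        err = np.clip(w[miss].sum(), 1e-10, 1 - 1e-10)
        a = 0.5 * np.log((1 - err) / err)        # AdaBoost learner weight
        w = w * np.exp(a * np.where(miss, 1.0, -1.0))
        w /= w.sum()
        learners.append(h)
        alphas.append(a)
    def predict(Xq):
        # Weighted vote over labels mapped to {-1, +1}.
        s = sum(a * np.where(h(Xq) == 1, 1.0, -1.0)
                for h, a in zip(learners, alphas))
        return (s > 0).astype(int)
    return predict
```

Each round, misclassified samples gain weight, so later subsamples concentrate on harder regions of the data; the context size per learner stays fixed even as the full dataset grows, which is the scaling mechanism the abstract describes.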

Cite

Text

Wang et al. "Prior-Fitted Networks Scale to Larger Datasets When Treated as Weak Learners." Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, 2025.

Markdown

[Wang et al. "Prior-Fitted Networks Scale to Larger Datasets When Treated as Weak Learners." Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, 2025.](https://mlanthology.org/aistats/2025/wang2025aistats-priorfitted/)

BibTeX

@inproceedings{wang2025aistats-priorfitted,
  title     = {{Prior-Fitted Networks Scale to Larger Datasets When Treated as Weak Learners}},
  author    = {Wang, Yuxin and Jiang, Botian and Guo, Yiran and Gan, Quan and Wipf, David and Huang, Xuanjing and Qiu, Xipeng},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  year      = {2025},
  pages     = {1090-1098},
  volume    = {258},
  url       = {https://mlanthology.org/aistats/2025/wang2025aistats-priorfitted/}
}