Hollmann, Noah

13 publications

NeurIPS 2025 · Do-PFN: In-Context Learning for Causal Effect Estimation · Jake Robertson, Arik Reuter, Siyuan Guo, Noah Hollmann, Frank Hutter, Bernhard Schölkopf
ICML 2025 · FairPFN: A Tabular Foundation Model for Causal Fairness · Jake Robertson, Noah Hollmann, Samuel Müller, Noor Awad, Frank Hutter
ICML 2025 · Position: The Future of Bayesian Prediction Is Prior-Fitted · Samuel Müller, Arik Reuter, Noah Hollmann, David Rügamer, Frank Hutter
NeurIPS 2024 · Drift-Resilient TabPFN: In-Context Learning Temporal Distribution Shifts on Tabular Data · Kai Helli, David Schnurr, Noah Hollmann, Samuel Müller, Frank Hutter
NeurIPSW 2024 · Drift-Resilient TabPFN: In-Context Learning Temporal Distribution Shifts on Tabular Data · David Schnurr, Kai Helli, Noah Hollmann, Samuel Müller, Frank Hutter
ICMLW 2024 · FairPFN: Transformers Can Do Counterfactual Fairness · Jake Robertson, Noah Hollmann, Noor Awad, Frank Hutter
ICMLW 2023 · CAAFE: Combining Large Language Models with Tabular Predictors for Semi-Automated Data Science · Noah Hollmann, Samuel Müller, Frank Hutter
NeurIPS 2023 · Large Language Models for Automated Data Science: Introducing CAAFE for Context-Aware Automated Feature Engineering · Noah Hollmann, Samuel Müller, Frank Hutter
ICML 2023 · PFNs4BO: In-Context Learning for Bayesian Optimization · Samuel Müller, Matthias Feurer, Noah Hollmann, Frank Hutter
ICLR 2023 · TabPFN: A Transformer That Solves Small Tabular Classification Problems in a Second · Noah Hollmann, Samuel Müller, Katharina Eggensperger, Frank Hutter
NeurIPSW 2022 · TabPFN: A Transformer That Solves Small Tabular Classification Problems in a Second · Noah Hollmann, Samuel Müller, Katharina Eggensperger, Frank Hutter
ICLR 2022 · Transformers Can Do Bayesian Inference · Samuel Müller, Noah Hollmann, Sebastian Pineda Arango, Josif Grabocka, Frank Hutter
NeurIPSW 2021 · Transformers Can Do Bayesian-Inference by Meta-Learning on Prior-Data · Samuel Müller, Noah Hollmann, Sebastian Pineda Arango, Josif Grabocka, Frank Hutter