Bayesian Optimization with a Neural Network Meta-Learned on Synthetic Data Only
Abstract
Bayesian Optimization (BO) is an effective approach to optimizing black-box functions, relying on a probabilistic surrogate to model the response surface. In this work, we propose to use a Prior-data Fitted Network (PFN) as a cheap and flexible surrogate. PFNs are neural networks that approximate the Posterior Predictive Distribution (PPD) in a single forward pass. Most importantly, they can approximate the PPD for any prior distribution that we can sample from efficiently. Additionally, we show what is required for PFNs to be used in a standard BO setting with common acquisition functions. We evaluate the performance of a PFN surrogate for hyperparameter optimization (HPO), a major application of BO. While the method can still fail for some search spaces, it performs comparably to or better than the state of the art on the HPO-B and PD1 benchmarks.
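The abstract's setup — a surrogate that predicts a posterior predictive distribution, queried by an acquisition function inside a BO loop — can be sketched as follows. This is a minimal illustration, not the paper's method: the toy objective, the kernel-weighted `surrogate_predict` (standing in for what a trained PFN would produce in one forward pass), and all names are assumptions for the sake of the example.

```python
import math
import random

def objective(x):
    # Toy black-box function to minimize (illustrative stand-in for,
    # e.g., a validation loss over one hyperparameter).
    return (x - 0.3) ** 2

def surrogate_predict(x, xs, ys):
    # Hypothetical stand-in for a PFN forward pass: given observations
    # (xs, ys), return a predictive mean and std at x. A real PFN would
    # output this posterior predictive directly from the network.
    weights = [math.exp(-50.0 * (x - xi) ** 2) for xi in xs]
    total = sum(weights) + 1e-9
    mean = sum(w * yi for w, yi in zip(weights, ys)) / total
    # Uncertainty shrinks near observed points.
    std = 1.0 / (1.0 + total)
    return mean, std

def expected_improvement(mean, std, best):
    # Closed-form EI for a Gaussian predictive distribution (minimization).
    if std <= 0:
        return 0.0
    z = (best - mean) / std
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (best - mean) * cdf + std * pdf

def bayes_opt(n_iters=20, seed=0):
    rng = random.Random(seed)
    xs = [rng.random() for _ in range(3)]          # initial design
    ys = [objective(x) for x in xs]
    candidates = [i / 200.0 for i in range(201)]   # discretized search space
    for _ in range(n_iters):
        best = min(ys)
        # Standard BO step: pick the candidate maximizing the acquisition
        # value under the surrogate, then evaluate the black box there.
        x_next = max(
            candidates,
            key=lambda x: expected_improvement(*surrogate_predict(x, xs, ys), best),
        )
        xs.append(x_next)
        ys.append(objective(x_next))
    return min(ys)
```

The point of the sketch is the division of labor: the surrogate only needs to map observations to a predictive mean and variance per candidate, which is exactly the interface a PFN can serve, and the acquisition function (here EI) does the rest.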
Cite
Müller et al. "Bayesian Optimization with a Neural Network Meta-Learned on Synthetic Data Only." NeurIPS 2022 Workshops: MetaLearn, 2022.
@inproceedings{muller2022neuripsw-bayesian,
title = {{Bayesian Optimization with a Neural Network Meta-Learned on Synthetic Data Only}},
author = {Müller, Samuel and Arango, Sebastian Pineda and Feurer, Matthias and Grabocka, Josif and Hutter, Frank},
booktitle = {NeurIPS 2022 Workshops: MetaLearn},
year = {2022},
url = {https://mlanthology.org/neuripsw/2022/muller2022neuripsw-bayesian/}
}