Meta-Learning Reliable Priors in the Function Space
Abstract
When data are scarce, meta-learning can improve a learner's accuracy by harnessing previous experience from related learning tasks. However, existing methods yield unreliable uncertainty estimates which are often overconfident. Addressing these shortcomings, we introduce a novel meta-learning framework, called F-PACOH, that treats meta-learned priors as stochastic processes and performs meta-level regularization directly in the function space. This allows us to directly steer the probabilistic predictions of the meta-learner towards high epistemic uncertainty in regions of insufficient meta-training data and, thus, obtain well-calibrated uncertainty estimates. Finally, we showcase how our approach can be integrated with sequential decision making, where reliable uncertainty quantification is imperative. In our benchmark study on meta-learning for Bayesian Optimization (BO), F-PACOH significantly outperforms all other meta-learners and standard baselines.
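To give an intuition for the function-space regularization the abstract describes, the following is a minimal, hypothetical sketch (not the authors' implementation): the meta-learned prior is compared against a vanilla GP hyper-prior by evaluating the KL divergence between their Gaussian marginals at a random set of measurement points. Penalizing this functional KL keeps the meta-learned prior's predictions close to the broad GP prior, and hence uncertain, wherever meta-training data are absent. All names, kernel choices, and numbers here are illustrative assumptions.

```python
import numpy as np

def gauss_kl(mu0, cov0, mu1, cov1):
    """KL( N(mu0, cov0) || N(mu1, cov1) ) between multivariate Gaussians."""
    k = mu0.shape[0]
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(cov1_inv @ cov0)
                  + diff @ cov1_inv @ diff
                  - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

def rbf_kernel(X, lengthscale=1.0, var=1.0):
    """Squared-exponential kernel matrix over the rows of X."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return var * np.exp(-0.5 * d2 / lengthscale ** 2)

rng = np.random.default_rng(0)
X_meas = rng.uniform(-3.0, 3.0, size=(8, 1))  # random measurement set

# Marginals of a hypothetical meta-learned prior at the measurement points
mu_prior = 0.3 * np.ones(8)
cov_prior = rbf_kernel(X_meas, lengthscale=0.5) + 1e-6 * np.eye(8)

# Marginals of a vanilla GP hyper-prior (zero mean, broad RBF kernel)
mu_hyper = np.zeros(8)
cov_hyper = rbf_kernel(X_meas, lengthscale=1.0) + 1e-6 * np.eye(8)

# Functional KL regularizer: large when the meta-learned prior strays
# from the uninformative hyper-prior on the measurement set.
reg = gauss_kl(mu_prior, cov_prior, mu_hyper, cov_hyper)
print(float(reg))
```

In a full meta-learning loop, a term like `reg` would be added to the meta-training objective and the measurement set resampled each step, so the penalty covers the whole input domain rather than only the meta-training inputs.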
Cite
Text
Rothfuss et al. "Meta-Learning Reliable Priors in the Function Space." NeurIPS 2021 Workshops: MetaLearn, 2021.

Markdown

[Rothfuss et al. "Meta-Learning Reliable Priors in the Function Space." NeurIPS 2021 Workshops: MetaLearn, 2021.](https://mlanthology.org/neuripsw/2021/rothfuss2021neuripsw-metalearning/)

BibTeX
@inproceedings{rothfuss2021neuripsw-metalearning,
title = {{Meta-Learning Reliable Priors in the Function Space}},
author = {Rothfuss, Jonas and Heyn, Dominique and Chen, Jinfan and Krause, Andreas},
booktitle = {NeurIPS 2021 Workshops: MetaLearn},
year = {2021},
url = {https://mlanthology.org/neuripsw/2021/rothfuss2021neuripsw-metalearning/}
}