PACOH: Bayes-Optimal Meta-Learning with PAC-Guarantees
Abstract
Meta-learning can successfully acquire useful inductive biases from data. Yet, its generalization properties to unseen learning tasks are poorly understood. Particularly if the number of meta-training tasks is small, this raises concerns about overfitting. We provide a theoretical analysis using the PAC-Bayesian framework and derive novel generalization bounds for meta-learning. Using these bounds, we develop a class of PAC-optimal meta-learning algorithms with performance guarantees and a principled meta-level regularization. Unlike previous PAC-Bayesian meta-learners, our method results in a standard stochastic optimization problem which can be solved efficiently and scales well. When instantiating our PAC-optimal hyper-posterior (PACOH) with Gaussian processes and Bayesian Neural Networks as base learners, the resulting methods yield state-of-the-art performance, both in terms of predictive accuracy and the quality of uncertainty estimates. Thanks to their principled treatment of uncertainty, our meta-learners can also be successfully employed for sequential decision problems.
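To make the abstract's claim of "a standard stochastic optimization problem" concrete, here is a minimal sketch of a MAP-style simplification of the PACOH idea with a Gaussian process base learner: the prior is parameterized by GP kernel hyperparameters, the per-task marginal log-likelihood plays the role of ln Z(S_i, P), and a Gaussian hyper-prior acts as the meta-level regularizer. The function names (`pacoh_map_objective`), the `reg_weight` setting, and the toy sine tasks are illustrative assumptions, not the authors' code; the paper's full method learns a hyper-posterior over priors rather than a point estimate.

```python
import numpy as np
from scipy.optimize import minimize

def gp_marginal_log_likelihood(params, X, y):
    """GP marginal log-likelihood ln Z(S_i, P) under an RBF kernel whose
    hyperparameters (log-lengthscale, log-signal-std, log-noise-std)
    play the role of the learnable prior P."""
    log_ls, log_sf, log_sn = params
    ls, sf2, sn2 = np.exp(log_ls), np.exp(2 * log_sf), np.exp(2 * log_sn)
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = sf2 * np.exp(-0.5 * sq_dists / ls ** 2) + sn2 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha - np.log(np.diag(L)).sum()
            - 0.5 * len(X) * np.log(2 * np.pi))

def pacoh_map_objective(params, tasks, reg_weight=0.1):
    """Average negative per-task-normalized marginal log-likelihood plus a
    Gaussian hyper-prior term serving as the meta-level regularizer."""
    nll = -sum(gp_marginal_log_likelihood(params, X, y) / len(X)
               for X, y in tasks) / len(tasks)
    return nll + reg_weight * np.sum(params ** 2)

# Toy meta-training set: a handful of related noisy sine tasks.
rng = np.random.default_rng(0)
tasks = []
for _ in range(5):
    X = rng.uniform(-3, 3, size=(20, 1))
    y = np.sin(X[:, 0] + rng.normal()) + 0.1 * rng.normal(size=20)
    tasks.append((X, y))

res = minimize(pacoh_map_objective, x0=np.zeros(3), args=(tasks,),
               method="L-BFGS-B")
print("meta-learned GP prior hyperparameters:", np.exp(res.x))
```

After meta-training, the learned hyperparameters define the prior used for GP inference on a new task, and the regularizer is what prevents the prior from overfitting when the number of meta-training tasks is small.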
Cite
Text
Rothfuss et al. "PACOH: Bayes-Optimal Meta-Learning with PAC-Guarantees." International Conference on Machine Learning, 2021.
Markdown
[Rothfuss et al. "PACOH: Bayes-Optimal Meta-Learning with PAC-Guarantees." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/rothfuss2021icml-pacoh/)
BibTeX
@inproceedings{rothfuss2021icml-pacoh,
title = {{PACOH: Bayes-Optimal Meta-Learning with PAC-Guarantees}},
author = {Rothfuss, Jonas and Fortuin, Vincent and Josifoski, Martin and Krause, Andreas},
booktitle = {International Conference on Machine Learning},
year = {2021},
pages = {9116--9126},
volume = {139},
url = {https://mlanthology.org/icml/2021/rothfuss2021icml-pacoh/}
}