PACOH: Bayes-Optimal Meta-Learning with PAC-Guarantees

Abstract

Meta-learning can successfully acquire useful inductive biases from data, especially when a large number of meta-tasks are available. Yet, its generalization properties to unseen tasks are poorly understood. Particularly when the number of meta-tasks is small, this raises concerns about potential overfitting. We provide a theoretical analysis using the PAC-Bayesian framework and derive novel generalization bounds for meta-learning with unbounded loss functions and Bayesian base learners. Using these bounds, we develop a class of PAC-optimal meta-learning algorithms with performance guarantees and a principled meta-regularization. When instantiating our PAC-optimal hyper-posterior (PACOH) with Gaussian processes as base learners, the resulting approach consistently outperforms several popular meta-learning methods, both in terms of predictive accuracy and the quality of its uncertainty estimates.
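
To make the flavor of such a PAC-optimal hyper-posterior concrete, here is a minimal NumPy sketch of a Gibbs-form hyper-posterior, restricted to a finite family of candidate priors for illustration. The function name, the uniform hyper-prior, and the temperature value are assumptions made for this example; the paper's hyper-posterior is defined over continuous families of priors (e.g., GP priors indexed by their hyperparameters), so this is a toy instance of the construction, not the paper's implementation.

import numpy as np

def gibbs_hyper_posterior(log_evidences, log_hyper_prior, temperature):
    """Combine per-task log marginal likelihoods into hyper-posterior weights.

    log_evidences: array of shape (num_priors, num_tasks); entry (k, i) is the
        log marginal likelihood of task dataset S_i under candidate prior P_k
        (e.g., a GP prior with fixed hyperparameters).
    log_hyper_prior: array of shape (num_priors,); log weights of the
        hyper-prior over the candidate priors.
    temperature: positive scalar weighting the evidence term; in a
        PAC-Bayesian bound it would be set by the sample sizes and the
        confidence level, here it is a free illustrative parameter.
    """
    # Gibbs form: hyper-prior reweighted by the exponentiated total evidence.
    scores = log_hyper_prior + temperature * log_evidences.sum(axis=1)
    scores -= scores.max()           # subtract max for numerical stability
    weights = np.exp(scores)
    return weights / weights.sum()   # normalized hyper-posterior weights

# Toy usage: 3 candidate priors scored on 4 meta-training tasks.
rng = np.random.default_rng(0)
log_Z = rng.normal(loc=-10.0, scale=2.0, size=(3, 4))
q = gibbs_hyper_posterior(log_Z, np.log(np.ones(3) / 3), temperature=0.5)
print(q)  # hyper-posterior mass on each candidate prior

The design point this sketch illustrates is that the bound-minimizing hyper-posterior reweights the hyper-prior by how well each candidate prior explains the observed meta-training tasks, which is exactly the principled meta-regularization the abstract refers to.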

Cite

Text

Rothfuss et al. "PACOH: Bayes-Optimal Meta-Learning with PAC-Guarantees." ICML 2020 Workshops: LifelongML, 2020.

Markdown

[Rothfuss et al. "PACOH: Bayes-Optimal Meta-Learning with PAC-Guarantees." ICML 2020 Workshops: LifelongML, 2020.](https://mlanthology.org/icmlw/2020/rothfuss2020icmlw-pacoh/)

BibTeX

@inproceedings{rothfuss2020icmlw-pacoh,
  title     = {{PACOH: Bayes-Optimal Meta-Learning with PAC-Guarantees}},
  author    = {Rothfuss, Jonas and Fortuin, Vincent and Krause, Andreas},
  booktitle = {ICML 2020 Workshops: LifelongML},
  year      = {2020},
  url       = {https://mlanthology.org/icmlw/2020/rothfuss2020icmlw-pacoh/}
}