Learning Gaussian Processes by Minimizing PAC-Bayesian Generalization Bounds
Abstract
Gaussian Processes (GPs) are a generic modelling tool for supervised learning. While they have been successfully applied to large datasets, their use in safety-critical applications is hindered by the lack of good performance guarantees. To this end, we propose a method to learn GPs and their sparse approximations by directly optimizing a PAC-Bayesian bound on their generalization performance, instead of maximizing the marginal likelihood. Besides its theoretical appeal, we find in our evaluation that our learning method is robust and yields significantly better generalization guarantees than other common GP approaches on several regression benchmark datasets.
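The abstract contrasts two training objectives: the usual maximization of the marginal likelihood and the paper's direct minimization of a PAC-Bayesian generalization bound. The sketch below illustrates the second idea only in broad strokes and is not the authors' code: the RBF kernel, the bounded epsilon-insensitive 0-1 loss, the standard PAC-Bayes-kl (Maurer/Seeger) bound form, and all function names (rbf_kernel, posterior_moments, gauss_kl, kl_inverse, pac_bayes_bound) are assumptions made for the example.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def rbf_kernel(X1, X2, ls, var):
    # Squared-exponential kernel matrix (hypothetical choice for this sketch).
    d2 = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2.0 * X1 @ X2.T
    return var * np.exp(-0.5 * d2 / ls**2)

def posterior_moments(X, y, ls, var, noise):
    # Exact GP posterior over the latent function at the training inputs.
    K = rbf_kernel(X, X, ls, var)
    L = np.linalg.cholesky(K + noise * np.eye(len(X)))
    A = np.linalg.solve(L, K)                      # L^{-1} K
    mean = A.T @ np.linalg.solve(L, y)             # K (K + noise I)^{-1} y
    cov = K - A.T @ A                              # K - K (K + noise I)^{-1} K
    return K, mean, cov

def gauss_kl(mean_q, cov_q, cov_p, jitter=1e-8):
    # KL( N(mean_q, cov_q) || N(0, cov_p) ): GP posterior vs. prior,
    # restricted to the training inputs.
    n = len(mean_q)
    Lp = np.linalg.cholesky(cov_p + jitter * np.eye(n))
    Lq = np.linalg.cholesky(cov_q + jitter * np.eye(n))
    M = np.linalg.solve(Lp, Lq)
    trace = np.sum(M**2)                           # tr(cov_p^{-1} cov_q)
    maha = np.sum(np.linalg.solve(Lp, mean_q)**2)  # mean_q^T cov_p^{-1} mean_q
    logdet = 2.0 * (np.sum(np.log(np.diag(Lp))) - np.sum(np.log(np.diag(Lq))))
    return 0.5 * (trace + maha - n + logdet)

def kl_inverse(q, c, tol=1e-9):
    # Largest p with binary KL kl(q || p) <= c, found by bisection.
    def kl_bern(q, p):
        q, p = np.clip(q, 1e-12, 1 - 1e-12), np.clip(p, 1e-12, 1 - 1e-12)
        return q * np.log(q / p) + (1 - q) * np.log((1 - q) / (1 - p))
    lo, hi = q, 1.0 - 1e-12
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if kl_bern(q, mid) <= c else (lo, mid)
    return lo

def pac_bayes_bound(log_params, X, y, eps=0.5, delta=0.05):
    # Objective: a PAC-Bayes-kl upper bound on the risk of the Gibbs
    # predictor, holding with probability >= 1 - delta over the sample.
    ls, var, noise = np.exp(log_params)
    N = len(X)
    K, mean, cov = posterior_moments(X, y, ls, var, noise)
    # Empirical Gibbs risk of the eps-insensitive 0-1 loss: each posterior
    # marginal is Gaussian (observation noise folded into the variance),
    # so the expected loss per point is exact.
    s = np.sqrt(np.diag(cov) + noise)
    z = mean - y
    hit = norm.cdf((eps - z) / s) - norm.cdf((-eps - z) / s)
    emp_risk = np.mean(1.0 - hit)
    kl = gauss_kl(mean, cov, K)
    c = (kl + np.log(2.0 * np.sqrt(N) / delta)) / N
    return kl_inverse(emp_risk, c)

# Usage: train by minimizing the bound itself (Nelder-Mead, since the
# bisection-based kl-inverse is not differentiated here).
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(60, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(60)
res = minimize(pac_bayes_bound, x0=np.zeros(3), args=(X, y), method="Nelder-Mead")
print("hyperparameters (ls, var, noise):", np.exp(res.x))
print("risk bound at optimum:", pac_bayes_bound(res.x, X, y))

The design point this surfaces: the bound itself, not a surrogate, becomes the training loss, so the value printed at the optimum is a high-probability guarantee on held-out risk under the stated loss and assumptions. A gradient-based optimizer would additionally need derivatives of the kl-inverse, which exist via the implicit function theorem but are omitted here.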
Cite
Text
Reeb et al. "Learning Gaussian Processes by Minimizing PAC-Bayesian Generalization Bounds." Neural Information Processing Systems, 2018.
Markdown
[Reeb et al. "Learning Gaussian Processes by Minimizing PAC-Bayesian Generalization Bounds." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/reeb2018neurips-learning/)
BibTeX
@inproceedings{reeb2018neurips-learning,
title = {{Learning Gaussian Processes by Minimizing PAC-Bayesian Generalization Bounds}},
author = {Reeb, David and Doerr, Andreas and Gerwinn, Sebastian and Rakitsch, Barbara},
booktitle = {Neural Information Processing Systems},
year = {2018},
pages = {3337--3347},
url = {https://mlanthology.org/neurips/2018/reeb2018neurips-learning/}
}