PAC-Bayesian Bound for Gaussian Process Regression and Multiple Kernel Additive Model

Abstract

We develop a PAC-Bayesian bound on the convergence rate of a Bayesian variant of Multiple Kernel Learning (MKL), an estimation method for the sparse additive model. Standard analyses of MKL require a strong condition on the design, analogous to the restricted eigenvalue condition used in the analysis of the Lasso and the Dantzig selector. In this paper, we apply the PAC-Bayesian technique to show that the Bayesian variant of MKL achieves the optimal convergence rate without such strong conditions on the design. Our approach is essentially a combination of the PAC-Bayes technique and recently developed theories of non-parametric Gaussian process regression. The bound is developed in the fixed-design setting. Our analysis includes existing results on Gaussian process regression as a special case, and the proof is much simpler by virtue of the PAC-Bayesian technique. We also give the convergence rate of the Bayesian variant of the Group Lasso as a finite-dimensional special case.
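For context, the sparse additive model that MKL estimates can be sketched as follows; the notation below ($y_i$, $x_i$, $f_m^*$, $\mathcal{H}_m$, $I_0$) is our illustrative choice, not taken from the paper itself.

```latex
% Sparse additive regression model over M reproducing kernel Hilbert
% spaces (RKHSs) H_1, ..., H_M; fixed design x_1, ..., x_n.
% Notation is illustrative, not the paper's own.
y_i = \sum_{m=1}^{M} f_m^{*}(x_i) + \epsilon_i,
\qquad f_m^{*} \in \mathcal{H}_m, \quad i = 1, \dots, n,
```

where the noise terms $\epsilon_i$ are i.i.d. Gaussian and sparsity means that only the components in a small index set $I_0 \subseteq \{1, \dots, M\}$ are nonzero. A Bayesian variant of MKL places Gaussian process priors on the components $f_m$, and the paper bounds the convergence rate of the resulting estimator without restricted-eigenvalue-type design conditions.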

Cite

Text

Suzuki. "PAC-Bayesian Bound for Gaussian Process Regression and Multiple Kernel Additive Model." Proceedings of the 25th Annual Conference on Learning Theory, 2012.

Markdown

[Suzuki. "PAC-Bayesian Bound for Gaussian Process Regression and Multiple Kernel Additive Model." Proceedings of the 25th Annual Conference on Learning Theory, 2012.](https://mlanthology.org/colt/2012/suzuki2012colt-pacbayesian/)

BibTeX

@inproceedings{suzuki2012colt-pacbayesian,
  title     = {{PAC-Bayesian Bound for Gaussian Process Regression and Multiple Kernel Additive Model}},
  author    = {Suzuki, Taiji},
  booktitle = {Proceedings of the 25th Annual Conference on Learning Theory},
  year      = {2012},
  pages     = {8.1--8.20},
  volume    = {23},
  url       = {https://mlanthology.org/colt/2012/suzuki2012colt-pacbayesian/}
}