Worst-Case Bounds for Gaussian Process Models

Abstract

We present a competitive analysis of some non-parametric Bayesian algorithms in a worst-case online learning setting, where no probabilistic assumptions about the generation of the data are made. We consider models which use a Gaussian process prior (over the space of all functions) and provide bounds on the regret (under the log loss) for commonly used non-parametric Bayesian algorithms — including Gaussian regression and logistic regression — which show how these algorithms can perform favorably under rather general conditions. These bounds explicitly handle the infinite dimensionality of these non-parametric classes in a natural way. We also make formal connections to the minimax and minimum description length (MDL) framework. Here, we show precisely how Bayesian Gaussian regression is a minimax strategy.
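The sketch below illustrates the online log-loss setting the abstract refers to, for the Gaussian regression case: points arrive one at a time, the Bayesian GP predicts each label from the previous ones, and the incurred log losses are accumulated. By the chain rule this cumulative loss equals the negative log marginal likelihood, and the data-dependent term in the paper's regret bound is (1/2) log det(I + σ⁻²K). This is a minimal sketch under assumed choices (an RBF kernel with unit lengthscale, noise variance 0.1, synthetic sine data), not code from the paper.

```python
import numpy as np

def rbf_kernel(X, Y, lengthscale=1.0):
    # Squared-exponential kernel; the paper's bounds hold for any
    # positive-definite kernel, this choice is only for illustration.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def online_gp_log_loss(X, y, noise_var=0.1):
    """Cumulative log loss of Bayesian GP regression when points arrive
    one at a time and each y_t is predicted from y_1, ..., y_{t-1}."""
    T = len(y)
    total = 0.0
    for t in range(T):
        if t == 0:
            mean = 0.0
            var = rbf_kernel(X[:1], X[:1])[0, 0] + noise_var
        else:
            K_tt = rbf_kernel(X[:t], X[:t]) + noise_var * np.eye(t)
            k_t = rbf_kernel(X[:t], X[t:t + 1]).ravel()
            sol = np.linalg.solve(K_tt, k_t)
            mean = sol @ y[:t]
            var = rbf_kernel(X[t:t + 1], X[t:t + 1])[0, 0] + noise_var - sol @ k_t
        # Negative log density of y_t under the Gaussian predictive.
        total += 0.5 * (np.log(2 * np.pi * var) + (y[t] - mean) ** 2 / var)
    return total

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(30, 1))
y = np.sin(2 * X).ravel() + 0.1 * rng.standard_normal(30)

noise_var = 0.1
K = rbf_kernel(X, X)
C = K + noise_var * np.eye(len(y))

# Chain rule: the sequential log loss equals the negative log marginal
# likelihood of the whole sequence, independent of presentation order.
neg_log_marginal = 0.5 * (y @ np.linalg.solve(C, y)
                          + np.linalg.slogdet(2 * np.pi * C)[1])
print(online_gp_log_loss(X, y, noise_var), neg_log_marginal)  # agree

# Data-dependent regret term from the bound: (1/2) log det(I + K / noise_var).
regret_term = 0.5 * np.linalg.slogdet(np.eye(len(y)) + K / noise_var)[1]
print(regret_term)
```

Note that the regret term grows only with the effective dimension of the kernel matrix, which is how the bounds handle the infinite dimensionality of the function class.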

Cite

Text

Kakade et al. "Worst-Case Bounds for Gaussian Process Models." Neural Information Processing Systems, 2005.

Markdown

[Kakade et al. "Worst-Case Bounds for Gaussian Process Models." Neural Information Processing Systems, 2005.](https://mlanthology.org/neurips/2005/kakade2005neurips-worstcase/)

BibTeX

@inproceedings{kakade2005neurips-worstcase,
  title     = {{Worst-Case Bounds for Gaussian Process Models}},
  author    = {Kakade, Sham M. and Seeger, Matthias W. and Foster, Dean P.},
  booktitle = {Neural Information Processing Systems},
  year      = {2005},
  pages     = {619-626},
  url       = {https://mlanthology.org/neurips/2005/kakade2005neurips-worstcase/}
}