Incremental Gaussian Processes

Abstract

In this paper, we consider Tipping’s relevance vector machine (RVM) [1] and formalize an incremental training strategy as a variant of the expectation-maximization (EM) algorithm that we call Subspace EM (SSEM). Working with a subset of active basis functions, the sparsity of the RVM solution will ensure that the number of basis functions and thereby the computational complexity is kept low. We also introduce a mean field approach to the intractable classification model that is expected to give a very good approximation to exact Bayesian inference and contains the Laplace approximation as a special case. We test the algorithms on two large data sets with O(10^3–10^4) examples. The results indicate that Bayesian learning of large data sets, e.g. the MNIST database, is realistic.
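To picture the kind of incremental scheme the abstract describes, here is a minimal NumPy sketch. It is an illustration only, not the paper's SSEM algorithm: it uses the standard EM updates for RVM regression hyperparameters (the E-step computes the Gaussian weight posterior, the M-step sets each alpha_i to 1/⟨w_i²⟩) inside a naive grow-and-prune loop over an active set. All function names, the pruning threshold, and the candidate-selection rule are hypothetical.

```python
import numpy as np

def rvm_em_step(Phi, t, alpha, beta):
    """One EM iteration for RVM regression hyperparameters (Tipping-style).

    E-step: weight posterior q(w) = N(mu, Sigma).
    M-step: alpha_i = 1 / <w_i^2>,  1/beta = expected residual / N.
    """
    Sigma = np.linalg.inv(beta * Phi.T @ Phi + np.diag(alpha))
    mu = beta * Sigma @ Phi.T @ t
    alpha_new = 1.0 / (mu**2 + np.diag(Sigma))
    # np.sum((Phi @ Sigma) * Phi) == trace(Phi Sigma Phi^T)
    resid = np.sum((t - Phi @ mu) ** 2) + np.sum((Phi @ Sigma) * Phi)
    beta_new = len(t) / resid
    return alpha_new, beta_new, mu

def incremental_rvm(Phi_full, t, n_add=10, n_sweeps=20, prune_at=1e6):
    """Hypothetical subspace-style loop: run EM on a small active set,
    prune basis functions whose alpha diverges (the RVM's sparsity),
    then admit a few new candidates, so the working set stays small."""
    N, M = Phi_full.shape
    active = list(range(min(n_add, M)))
    tried = set(active)
    alpha = np.ones(len(active))
    beta = 1.0
    for _ in range(n_sweeps):
        for _ in range(3):  # a few EM steps on the current subspace
            alpha, beta, mu = rvm_em_step(Phi_full[:, active], t, alpha, beta)
        keep = alpha < prune_at  # huge alpha => weight pinned to zero
        active = [a for a, k in zip(active, keep) if k]
        alpha = alpha[keep]
        new = [j for j in range(M) if j not in tried][:n_add]  # naive choice
        active += new
        tried.update(new)
        alpha = np.concatenate([alpha, np.ones(len(new))])
    return np.array(active), alpha, beta
```

Because pruned basis functions drop out of the matrix inversion entirely, the cost per sweep scales with the size of the active set rather than with the full dictionary, which is the point of working in a subspace.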

Cite

Text

Quiñonero-Candela and Winther. "Incremental Gaussian Processes." Neural Information Processing Systems, 2002.

Markdown

[Quiñonero-Candela and Winther. "Incremental Gaussian Processes." Neural Information Processing Systems, 2002.](https://mlanthology.org/neurips/2002/quinonerocandela2002neurips-incremental/)

BibTeX

@inproceedings{quinonerocandela2002neurips-incremental,
  title     = {{Incremental Gaussian Processes}},
  author    = {Quiñonero-Candela, Joaquin and Winther, Ole},
  booktitle = {Neural Information Processing Systems},
  year      = {2002},
  pages     = {1025--1032},
  url       = {https://mlanthology.org/neurips/2002/quinonerocandela2002neurips-incremental/}
}