Fast Sparse Gaussian Process Methods: The Informative Vector Machine
Abstract
We present a framework for sparse Gaussian process (GP) methods which uses forward selection with criteria based on information-theoretic principles, previously suggested for active learning. Our goal is not only to learn d-sparse predictors (which can be evaluated in O(d) rather than O(n), d ≪ n, n the number of training points), but also to perform training under strong restrictions on time and memory requirements. The scaling of our method is at most O(n · d²), and in large real-world classification experiments we show that it can match prediction performance of the popular support vector machine (SVM), yet can be significantly faster in training. In contrast to the SVM, our approximation produces estimates of predictive probabilities (‘error bars’), allows for Bayesian model selection and is less complex in implementation.
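The greedy selection the abstract describes can be illustrated with a minimal sketch. The version below assumes a Gaussian (regression) likelihood and noise-free conditioning on the active set, which simplifies the assumed-density-filtering updates used in the paper; the names `ivm_select` and `noise_var` are illustrative, not from the paper.

```python
import numpy as np

def ivm_select(K, noise_var, d):
    """Greedy forward selection of a size-d active set from n points.

    Sketch only: scores each point by the differential-entropy reduction
    0.5 * log(1 + var_i / noise_var) and conditions the prior noise-free
    on the chosen point via a rank-one (incomplete Cholesky) update. The
    paper's ADF updates also cover non-Gaussian likelihoods.
    """
    n = K.shape[0]
    var = K.diagonal().copy()      # Var[f_i | f_active], initially the prior
    L = np.zeros((n, d))           # partial Cholesky factor, K ~= L @ L.T
    active = []
    for j in range(d):
        score = 0.5 * np.log1p(var / noise_var)  # entropy reduction per point
        i = int(np.argmax(score))                # most informative point
        active.append(i)
        # O(n * j) rank-one update: new column of the incomplete Cholesky
        L[:, j] = (K[:, i] - L[:, :j] @ L[i, :j]) / np.sqrt(var[i])
        var = np.clip(var - L[:, j] ** 2, 0.0, None)  # variances only shrink
    return active, L
```

Each of the d iterations costs O(n) for scoring plus O(n · j) for the update, so the total matches the O(n · d²) bound quoted above; prediction then touches only the d active points, giving the O(d) evaluation cost.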
Cite
Text
Lawrence et al. "Fast Sparse Gaussian Process Methods: The Informative Vector Machine." Neural Information Processing Systems, 2002.
Markdown
[Lawrence et al. "Fast Sparse Gaussian Process Methods: The Informative Vector Machine." Neural Information Processing Systems, 2002.](https://mlanthology.org/neurips/2002/lawrence2002neurips-fast/)
BibTeX
@inproceedings{lawrence2002neurips-fast,
  title     = {{Fast Sparse Gaussian Process Methods: The Informative Vector Machine}},
  author    = {Lawrence, Neil D. and Seeger, Matthias and Herbrich, Ralf},
  booktitle = {Neural Information Processing Systems},
  year      = {2002},
  pages     = {625--632},
  url       = {https://mlanthology.org/neurips/2002/lawrence2002neurips-fast/}
}