Sparse Greedy Gaussian Process Regression
Abstract
We present a simple sparse greedy technique to approximate the maximum a posteriori estimate of Gaussian Processes with much improved scaling behaviour in the sample size m. In particular, computational requirements are O(n²m), storage is O(nm), the cost for prediction is O(n), and the cost to compute confidence bounds is O(nm), where n ≪ m. We show how to compute a stopping criterion, give bounds on the approximation error, and show applications to large scale problems.
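The idea of the paper can be illustrated with a small sketch: instead of solving the full m × m GP regression system, restrict the kernel expansion to n ≪ m greedily chosen training points, at each step adding the candidate (from a small random pool) that most decreases the regularized objective. The code below is a minimal illustration of this scheme, not the authors' implementation; the RBF kernel, the candidate-pool size, and the helper names (`sparse_greedy_gpr`, `_solve`) are assumptions for the example.

```python
import numpy as np

def rbf(X, Z, gamma=1.0):
    # Squared-exponential kernel matrix between rows of X and Z (assumed kernel choice).
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def _solve(X, y, S, noise):
    # Solve for coefficients supported on the index set S:
    # minimize  -y^T K_mS a + 0.5 a^T (K_Sm K_mS + noise^2 K_SS) a
    KmS = rbf(X, X[S])                       # m x n cross-kernel
    KSS = rbf(X[S], X[S])                    # n x n sub-kernel
    A = KmS.T @ KmS + noise**2 * KSS
    b = KmS.T @ y
    alpha = np.linalg.solve(A + 1e-10 * np.eye(len(S)), b)  # small jitter for stability
    obj = -b @ alpha + 0.5 * alpha @ A @ alpha
    return obj, alpha

def sparse_greedy_gpr(X, y, n_basis=10, noise=0.1, n_candidates=20, seed=0):
    # Greedily pick n_basis expansion centres; each step scores a random
    # pool of candidates and keeps the one giving the lowest objective.
    rng = np.random.default_rng(seed)
    S, remaining = [], list(range(len(X)))
    for _ in range(n_basis):
        pool = rng.choice(remaining, size=min(n_candidates, len(remaining)),
                          replace=False)
        best, best_obj = None, np.inf
        for j in pool:
            obj, _ = _solve(X, y, S + [int(j)], noise)
            if obj < best_obj:
                best_obj, best = obj, int(j)
        S.append(best)
        remaining.remove(best)
    _, alpha = _solve(X, y, S, noise)
    return S, alpha
```

Prediction at test points then costs only O(n) kernel evaluations per point: `rbf(Xtest, X[S]) @ alpha`.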
Cite
Text
Smola and Bartlett. "Sparse Greedy Gaussian Process Regression." Neural Information Processing Systems, 2000.
Markdown
[Smola and Bartlett. "Sparse Greedy Gaussian Process Regression." Neural Information Processing Systems, 2000.](https://mlanthology.org/neurips/2000/smola2000neurips-sparse/)
BibTeX
@inproceedings{smola2000neurips-sparse,
title = {{Sparse Greedy Gaussian Process Regression}},
author = {Smola, Alex J. and Bartlett, Peter L.},
booktitle = {Neural Information Processing Systems},
year = {2000},
pages = {619-625},
url = {https://mlanthology.org/neurips/2000/smola2000neurips-sparse/}
}