Scaling Gaussian Process Regression with Derivatives

Abstract

Gaussian processes (GPs) with derivatives are useful in many applications, including Bayesian optimization, implicit surface reconstruction, and terrain reconstruction. Fitting a GP to function values and derivatives at $n$ points in $d$ dimensions requires linear solves and log determinants with an $n(d+1) \times n(d+1)$ positive definite matrix, leading to prohibitive $\mathcal{O}(n^3d^3)$ computations for standard direct methods. We propose iterative solvers using fast $\mathcal{O}(nd)$ matrix-vector multiplications (MVMs), together with pivoted Cholesky preconditioning that cuts the iterations to convergence by several orders of magnitude, allowing for fast kernel learning and prediction. Our approaches, together with dimensionality reduction, allow us to scale Bayesian optimization with derivatives to high-dimensional problems and large evaluation budgets.
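To make the matrix structure concrete, here is a minimal, self-contained sketch (not the authors' implementation): it assembles the dense $n(d+1) \times n(d+1)$ covariance for an RBF kernel over function values and gradients, builds a low-rank preconditioner with a simple greedy pivoted Cholesky, and runs preconditioned conjugate gradients. The RBF kernel, lengthscale `ell`, toy target `f(x) = sin(x0) + cos(x1)`, rank, and noise level are all illustrative assumptions, and the paper's fast $\mathcal{O}(nd)$ MVMs rely on structured kernel approximations that this dense example deliberately omits.

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator


def rbf_kernel_with_grads(X, ell=1.0):
    """Dense n(d+1) x n(d+1) RBF covariance over function values and gradients.

    Block (i, j) is the (d+1) x (d+1) covariance between
    [f(x_i); grad f(x_i)] and [f(x_j); grad f(x_j)] for
    k(x, y) = exp(-||x - y||^2 / (2 ell^2)).
    """
    n, d = X.shape
    K = np.zeros((n * (d + 1), n * (d + 1)))
    for i in range(n):
        for j in range(n):
            r = X[i] - X[j]
            k = np.exp(-(r @ r) / (2.0 * ell**2))
            B = np.empty((d + 1, d + 1))
            B[0, 0] = k                                  # cov(f, f)
            B[0, 1:] = (r / ell**2) * k                  # cov(f, grad f)
            B[1:, 0] = -(r / ell**2) * k                 # cov(grad f, f)
            B[1:, 1:] = (np.eye(d) / ell**2
                         - np.outer(r, r) / ell**4) * k  # cov(grad f, grad f)
            K[i*(d+1):(i+1)*(d+1), j*(d+1):(j+1)*(d+1)] = B
    return K


def pivoted_cholesky(A, rank):
    """Greedy rank-`rank` pivoted Cholesky factor L, so that A ~= L @ L.T."""
    n = A.shape[0]
    diag = np.diag(A).astype(float).copy()
    L = np.zeros((n, rank))
    for k in range(rank):
        p = int(np.argmax(diag))                 # pivot on largest residual diagonal
        L[:, k] = (A[:, p] - L[:, :k] @ L[p, :k]) / np.sqrt(diag[p])
        diag -= L[:, k] ** 2
    return L


# Toy data: values and exact gradients of f(x) = sin(x0) + cos(x1) (assumed example).
rng = np.random.default_rng(0)
n, d, sigma2, rank = 100, 2, 1e-4, 30
X = rng.uniform(-2.0, 2.0, size=(n, d))
y = np.column_stack([np.sin(X[:, 0]) + np.cos(X[:, 1]),   # f(x_i)
                     np.cos(X[:, 0]),                     # df/dx0
                     -np.sin(X[:, 1])]).ravel()           # df/dx1

Khat = rbf_kernel_with_grads(X) + sigma2 * np.eye(n * (d + 1))

# Preconditioner: apply (L L^T + sigma2 I)^{-1} cheaply via the Woodbury identity.
L = pivoted_cholesky(Khat, rank)
S = sigma2 * np.eye(rank) + L.T @ L
precond = LinearOperator(
    Khat.shape,
    matvec=lambda v: (v - L @ np.linalg.solve(S, L.T @ v)) / sigma2,
)

alpha, info = cg(Khat, y, M=precond)   # preconditioned CG solve of K alpha = y
```

Applying the preconditioner through the Woodbury identity costs only $\mathcal{O}(n(d+1)\,r)$ per CG iteration for rank $r$, which is why a modest pivoted Cholesky rank can sharply reduce iteration counts without dominating the cost of the solve.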

Cite

Text

Eriksson et al. "Scaling Gaussian Process Regression with Derivatives." Neural Information Processing Systems, 2018.

Markdown

[Eriksson et al. "Scaling Gaussian Process Regression with Derivatives." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/eriksson2018neurips-scaling/)

BibTeX

@inproceedings{eriksson2018neurips-scaling,
  title     = {{Scaling Gaussian Process Regression with Derivatives}},
  author    = {Eriksson, David and Dong, Kun and Lee, Eric and Bindel, David and Wilson, Andrew G.},
  booktitle = {Neural Information Processing Systems},
  year      = {2018},
  pages     = {6867--6877},
  url       = {https://mlanthology.org/neurips/2018/eriksson2018neurips-scaling/}
}