Kernel Interpolation with Sparse Grids

Abstract

Structured kernel interpolation (SKI) accelerates Gaussian process (GP) inference by interpolating the kernel covariance function using a dense grid of inducing points, whose corresponding kernel matrix is highly structured and thus amenable to fast linear algebra. Unfortunately, SKI scales poorly in the dimension of the input points, since the dense grid size grows exponentially with the dimension. To mitigate this issue, we propose the use of sparse grids within the SKI framework. These grids enable accurate interpolation, but with a number of points growing more slowly with dimension. We contribute a novel nearly linear time matrix-vector multiplication algorithm for the sparse grid kernel matrix. We also describe how sparse grids can be combined with an efficient interpolation scheme based on simplicial complexes. With these modifications, we demonstrate that SKI can be scaled to higher dimensions while maintaining accuracy, for both synthetic and real datasets.
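To make the SKI idea concrete, the following is a minimal one-dimensional sketch (not the paper's sparse-grid algorithm): the kernel matrix is approximated as W K_uu W^T, where K_uu is the kernel on a regular grid of inducing points and W is a sparse linear-interpolation matrix, so a matrix-vector product reduces to three cheap multiplies. All names and parameter values here are illustrative assumptions.

```python
import numpy as np

def rbf(a, b, ls=0.5):
    # Squared-exponential kernel between two 1-D point sets (illustrative choice).
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def linear_interp_weights(x, grid):
    # Interpolation matrix W: each row has only two nonzeros, the linear
    # interpolation weights for the grid cell containing x_i. In real SKI
    # implementations W is stored as a sparse matrix; a dense array is
    # used here only to keep the sketch short.
    n, m = len(x), len(grid)
    W = np.zeros((n, m))
    h = grid[1] - grid[0]
    idx = np.clip(((x - grid[0]) // h).astype(int), 0, m - 2)
    frac = (x - grid[idx]) / h
    W[np.arange(n), idx] = 1.0 - frac
    W[np.arange(n), idx + 1] = frac
    return W

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 200)          # training inputs
grid = np.linspace(0.0, 1.0, 50)        # dense grid of inducing points
W = linear_interp_weights(x, grid)
K_uu = rbf(grid, grid)                  # structured grid kernel matrix

v = rng.normal(size=200)
mvm_ski = W @ (K_uu @ (W.T @ v))        # SKI-style structured MVM
mvm_exact = rbf(x, x) @ v               # dense O(n^2) reference
rel_err = np.linalg.norm(mvm_ski - mvm_exact) / np.linalg.norm(mvm_exact)
print(rel_err)  # small: interpolation error shrinks with grid resolution
```

In 1-D the grid is cheap, but a full tensor-product grid in d dimensions has exponentially many points, which is the scaling bottleneck the paper's sparse grids address.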

Cite

Text

Yadav et al. "Kernel Interpolation with Sparse Grids." Neural Information Processing Systems, 2022.

Markdown

[Yadav et al. "Kernel Interpolation with Sparse Grids." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/yadav2022neurips-kernel/)

BibTeX

@inproceedings{yadav2022neurips-kernel,
  title     = {{Kernel Interpolation with Sparse Grids}},
  author    = {Yadav, Mohit and Sheldon, Daniel R. and Musco, Cameron},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/yadav2022neurips-kernel/}
}