SKIing on Simplices: Kernel Interpolation on the Permutohedral Lattice for Scalable Gaussian Processes
Abstract
State-of-the-art methods for scalable Gaussian processes use iterative algorithms, requiring fast matrix-vector multiplies (MVMs) with the covariance kernel. The Structured Kernel Interpolation (SKI) framework accelerates these MVMs by performing efficient MVMs on a grid and interpolating back to the original space. In this work, we develop a connection between SKI and the permutohedral lattice used for high-dimensional fast bilateral filtering. Using a sparse simplicial grid instead of a dense rectangular one, we can perform GP inference exponentially faster in the dimension than SKI. Our approach, Simplex-GP, enables scaling SKI to high dimensions while maintaining strong predictive performance. We additionally provide a CUDA implementation of Simplex-GP, which enables significant GPU acceleration of MVM-based inference.
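For intuition, the MVM structure the abstract describes can be sketched in a few lines: SKI approximates the kernel matrix as W K_grid W^T, where W is a sparse interpolation matrix, so a product K v reduces to three cheap multiplies. The sketch below is a minimal, hypothetical 1-D NumPy version using linear interpolation weights; the function names and setup are illustrative assumptions, not the paper's permutohedral-lattice or CUDA implementation.

import numpy as np
from scipy.sparse import csr_matrix

def rbf_kernel(A, B, lengthscale=0.5):
    # Dense RBF kernel; in SKI this is only ever evaluated on the small grid.
    sq = (A[:, None] - B[None, :]) ** 2
    return np.exp(-0.5 * sq / lengthscale ** 2)

def linear_interp_weights(x, grid):
    # Sparse W: each 1-D point puts weight on its two neighboring grid points.
    h = grid[1] - grid[0]
    idx = np.clip(((x - grid[0]) / h).astype(int), 0, len(grid) - 2)
    frac = (x - grid[idx]) / h
    rows = np.repeat(np.arange(len(x)), 2)
    cols = np.stack([idx, idx + 1], axis=1).ravel()
    vals = np.stack([1 - frac, frac], axis=1).ravel()
    return csr_matrix((vals, (rows, cols)), shape=(len(x), len(grid)))

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=200)        # data locations
grid = np.linspace(0.0, 1.0, 50)           # inducing grid
W = linear_interp_weights(x, grid)         # sparse interpolation matrix
K_grid = rbf_kernel(grid, grid)            # kernel on the structured grid
v = rng.normal(size=200)

mvm_ski = W @ (K_grid @ (W.T @ v))         # structured MVM: interpolate, multiply, interpolate back
mvm_exact = rbf_kernel(x, x) @ v           # dense MVM for comparison
print(np.max(np.abs(mvm_ski - mvm_exact)))  # small interpolation error

Simplex-GP keeps this same three-step structure but replaces the rectangular grid with the permutohedral lattice, whose simplicial cells have only d + 1 vertices rather than 2^d corners, which is the source of the exponential speedup in dimension.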
Cite
Text
Kapoor et al. "SKIing on Simplices: Kernel Interpolation on the Permutohedral Lattice for Scalable Gaussian Processes." International Conference on Machine Learning, 2021.

Markdown
[Kapoor et al. "SKIing on Simplices: Kernel Interpolation on the Permutohedral Lattice for Scalable Gaussian Processes." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/kapoor2021icml-skiing/)

BibTeX
@inproceedings{kapoor2021icml-skiing,
title = {{SKIing on Simplices: Kernel Interpolation on the Permutohedral Lattice for Scalable Gaussian Processes}},
author = {Kapoor, Sanyam and Finzi, Marc and Wang, Ke Alexander and Wilson, Andrew Gordon},
booktitle = {International Conference on Machine Learning},
year = {2021},
pages = {5279--5289},
volume = {139},
url = {https://mlanthology.org/icml/2021/kapoor2021icml-skiing/}
}