Nyström Kernel Mean Embeddings
Abstract
Kernel mean embeddings are a powerful tool to represent probability distributions over arbitrary spaces as single points in a Hilbert space. Yet, the cost of computing and storing such embeddings prohibits their direct use in large-scale settings. We propose an efficient approximation procedure based on the Nyström method, which exploits a small random subset of the dataset. Our main result is an upper bound on the approximation error of this procedure. It yields sufficient conditions on the subsample size to obtain the standard O(1/√n) rate while reducing computational costs. We discuss applications of this result for the approximation of the maximum mean discrepancy and quadrature rules, and we illustrate our theoretical findings with numerical experiments.
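To make the procedure concrete, here is a minimal sketch in Python with NumPy (not the authors' implementation): it projects the empirical mean embedding μ̂ = (1/n) Σᵢ k(·, xᵢ) onto the span of m uniformly subsampled landmark features and evaluates the resulting squared RKHS error through kernel evaluations. The Gaussian kernel, the uniform subsampling scheme, and all function names are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def nystrom_kme(X, m, sigma=1.0, seed=0):
    # Coefficients alpha and landmarks L such that the Nystrom mean
    # embedding mu_tilde = sum_j alpha_j k(., L_j) is the orthogonal
    # projection of mu_hat onto span{k(., L_j)}.
    rng = np.random.default_rng(seed)
    L = X[rng.choice(len(X), size=m, replace=False)]  # m uniform landmarks
    K_mm = gaussian_kernel(L, L, sigma)               # (m, m)
    K_mn = gaussian_kernel(L, X, sigma)               # (m, n)
    # alpha = K_mm^+ (K_mn 1_n / n)
    alpha = np.linalg.pinv(K_mm, hermitian=True) @ K_mn.mean(axis=1)
    return alpha, L

# Toy validation: squared RKHS distance ||mu_hat - mu_tilde||^2 expanded
# through kernel evaluations (forms the full n x n matrix, small n only).
X = np.random.default_rng(1).normal(size=(1000, 5))
alpha, L = nystrom_kme(X, m=100)
mu_hat_sq = gaussian_kernel(X, X).mean()              # ||mu_hat||^2
cross = alpha @ gaussian_kernel(L, X).mean(axis=1)    # <mu_hat, mu_tilde>
mu_til_sq = alpha @ gaussian_kernel(L, L) @ alpha     # ||mu_tilde||^2
print(f"squared approximation error: {mu_hat_sq - 2 * cross + mu_til_sq:.3e}")
```

Computing the coefficients requires only O(nm) kernel evaluations plus an O(m³) pseudoinverse, which is the source of the computational savings when m ≪ n; the exact error check at the end forms the full n × n kernel matrix and is included only to validate the approximation on small data.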
Cite
Text
Chatalic et al. "Nyström Kernel Mean Embeddings." International Conference on Machine Learning, 2022.
Markdown
[Chatalic et al. "Nyström Kernel Mean Embeddings." International Conference on Machine Learning, 2022.](https://mlanthology.org/icml/2022/chatalic2022icml-nystrom/)
BibTeX
@inproceedings{chatalic2022icml-nystrom,
title = {{Nyström Kernel Mean Embeddings}},
author = {Chatalic, Antoine and Schreuder, Nicolas and Rosasco, Lorenzo and Rudi, Alessandro},
booktitle = {International Conference on Machine Learning},
year = {2022},
pages = {3006--3024},
volume = {162},
url = {https://mlanthology.org/icml/2022/chatalic2022icml-nystrom/}
}