Improved Convergence Rates for Sparse Approximation Methods in Kernel-Based Learning

Abstract

Kernel-based models such as kernel ridge regression and Gaussian processes are ubiquitous in machine learning applications for regression and optimization. It is well known that a major downside of kernel-based models is their high computational cost: given a dataset of $n$ samples, the cost grows as $\mathcal{O}(n^3)$. Existing sparse approximation methods can yield a significant reduction in computational cost, effectively reducing it to as low as $\mathcal{O}(n)$ in certain cases. Despite this remarkable empirical success, significant gaps remain in the existing analytical bounds on the error due to approximation. In this work, we provide novel confidence intervals for the Nyström method and the sparse variational Gaussian process approximation method, which we establish using novel interpretations of the approximate (surrogate) posterior variance of these models. Our confidence intervals lead to improved performance bounds in both regression and optimization problems.

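For readers unfamiliar with the approximation discussed in the abstract, the sketch below illustrates the standard Nyström (subset-of-regressors) approximation to kernel ridge regression, where an $m \times m$ system over inducing points replaces the full $n \times n$ solve, cutting the cost from $\mathcal{O}(n^3)$ to $\mathcal{O}(nm^2)$. This is a minimal, self-contained NumPy illustration; the kernel choice, regularization value, and random selection of inducing points are assumptions made for the example and are not taken from the paper.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """Squared-exponential kernel matrix between rows of A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq / lengthscale**2)

def nystrom_krr_fit(X, y, Z, lam=1e-2):
    """Fit Nystrom-approximated kernel ridge regression.

    Solving the m x m system costs O(n m^2) for n training points and
    m inducing (landmark) points, instead of O(n^3) for exact KRR.
    """
    K_nm = rbf_kernel(X, Z)          # n x m cross-kernel
    K_mm = rbf_kernel(Z, Z)          # m x m inducing-point kernel
    A = K_nm.T @ K_nm + lam * K_mm   # subset-of-regressors normal equations
    jitter = 1e-10 * np.eye(len(Z))  # numerical stabilization (assumption)
    alpha = np.linalg.solve(A + jitter, K_nm.T @ y)
    return alpha

def nystrom_krr_predict(X_test, Z, alpha):
    """Approximate predictive mean at the test inputs."""
    return rbf_kernel(X_test, Z) @ alpha

# Toy usage: 2000 training points, 50 inducing points chosen at random.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.standard_normal(2000)
Z = X[rng.choice(len(X), size=50, replace=False)]
alpha = nystrom_krr_fit(X, y, Z, lam=0.1)
X_test = np.linspace(-3, 3, 5)[:, None]
print(nystrom_krr_predict(X_test, Z, alpha))
```

The paper's contribution concerns confidence intervals built from the approximate posterior variance of such sparse models (Nyström and sparse variational Gaussian processes), rather than the point predictor sketched above.
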
Cite

Text

Vakili et al. "Improved Convergence Rates for Sparse Approximation Methods in Kernel-Based Learning." International Conference on Machine Learning, 2022.

Markdown

[Vakili et al. "Improved Convergence Rates for Sparse Approximation Methods in Kernel-Based Learning." International Conference on Machine Learning, 2022.](https://mlanthology.org/icml/2022/vakili2022icml-improved/)

BibTeX

@inproceedings{vakili2022icml-improved,
  title     = {{Improved Convergence Rates for Sparse Approximation Methods in Kernel-Based Learning}},
  author    = {Vakili, Sattar and Scarlett, Jonathan and Shiu, Da-Shan and Bernacchia, Alberto},
  booktitle = {International Conference on Machine Learning},
  year      = {2022},
  pages     = {21960--21983},
  volume    = {162},
  url       = {https://mlanthology.org/icml/2022/vakili2022icml-improved/}
}