On the Error of Random Fourier Features

Abstract

Kernel methods provide powerful, flexible, and theoretically grounded approaches to many problems in machine learning. The standard approach, however, requires pairwise evaluations of a kernel function, which can lead to scalability issues for very large datasets. Rahimi and Recht (2007) proposed a popular approach to handling this problem, known as random Fourier features. The quality of this approximation, however, is not well understood. We improve the uniform error bound of that paper, and also give novel insights into the embedding's variance, approximation error, and use in some machine learning methods. We also point out, surprisingly, that of the two main variants of those features, the more widely used one has strictly higher variance for the Gaussian kernel and worse bounds.
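The two variants compared in the paper can be sketched as follows. This is an illustrative example, not code from the paper: one embedding uses D features of the form cos(wᵀx + b) with a random phase offset, the other pairs sin and cos of D/2 frequencies. Both give unbiased estimates of the Gaussian kernel; the dimensions, bandwidth, and test points below are arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 5, 4000        # input dimension, number of random features
sigma = 1.0           # Gaussian kernel bandwidth

# For the Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2)),
# Bochner's theorem gives frequencies sampled from N(0, I / sigma^2).
W = rng.normal(scale=1.0 / sigma, size=(D, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def z_offset(x):
    """More widely used variant: sqrt(2/D) cos(w_i^T x + b_i), b_i ~ Unif[0, 2pi)."""
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

def z_paired(x):
    """Paired variant: D/2 frequencies, each contributing a (cos, sin) pair."""
    Wh = W[: D // 2]
    return np.sqrt(2.0 / D) * np.concatenate([np.cos(Wh @ x), np.sin(Wh @ x)])

x = rng.normal(size=d)
y = x + 0.3  # a nearby point, so k(x, y) is not vanishingly small
true_k = np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma**2))

print("exact kernel:", true_k)
print("offset variant:", z_offset(x) @ z_offset(y))
print("paired variant:", z_paired(x) @ z_paired(y))
```

Both inner products concentrate around the exact kernel value as D grows; the paper's result is that, for the Gaussian kernel, the offset variant's estimate has strictly higher variance than the paired one at the same total feature count.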

Cite

Text

Sutherland and Schneider. "On the Error of Random Fourier Features." Conference on Uncertainty in Artificial Intelligence, 2015.

Markdown

[Sutherland and Schneider. "On the Error of Random Fourier Features." Conference on Uncertainty in Artificial Intelligence, 2015.](https://mlanthology.org/uai/2015/sutherland2015uai-error/)

BibTeX

@inproceedings{sutherland2015uai-error,
  title     = {{On the Error of Random Fourier Features}},
  author    = {Sutherland, Danica J. and Schneider, Jeff G.},
  booktitle = {Conference on Uncertainty in Artificial Intelligence},
  year      = {2015},
  pages     = {862--871},
  url       = {https://mlanthology.org/uai/2015/sutherland2015uai-error/}
}