Efficient Kernel Machines Using the Improved Fast Gauss Transform

Abstract

The computation and memory required for kernel machines with N training samples are at least O(N²). Such complexity is significant even for moderate-size problems and is prohibitive for large datasets. We present an approximation technique based on the improved fast Gauss transform to reduce the computation to O(N). We also give an error bound for the approximation, and provide experimental results on the UCI datasets.
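For context, the core quantity the improved fast Gauss transform (IFGT) accelerates is the discrete Gauss transform, G(y_j) = Σ_i q_i exp(−‖y_j − x_i‖²/h²), whose direct evaluation over N sources and M targets costs O(NM), i.e. the O(N²) bottleneck the abstract refers to. The sketch below is a minimal illustration of that direct evaluation, not of the paper's method; the function name, bandwidth convention, and toy data are illustrative assumptions.

import numpy as np

def direct_gauss_transform(sources, targets, weights, h):
    """Directly evaluate the discrete Gauss transform
    G(y_j) = sum_i q_i * exp(-||y_j - x_i||^2 / h^2).
    Cost is O(N * M) for N sources and M targets — the quadratic
    cost the IFGT reduces to linear (up to a bounded error).
    """
    # Pairwise squared distances between targets and sources: (M, N)
    diffs = targets[:, None, :] - sources[None, :, :]
    sq_dists = np.sum(diffs ** 2, axis=-1)
    # Weighted Gaussian sum for each target: (M,)
    return np.exp(-sq_dists / h ** 2) @ weights

# Toy usage (hypothetical data): 500 sources/targets in 2 dimensions.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2))   # sources x_i
Y = rng.standard_normal((500, 2))   # targets y_j
q = rng.standard_normal(500)        # weights q_i
G = direct_gauss_transform(X, Y, q, h=1.0)

The IFGT replaces this exhaustive double loop with a truncated multivariate Taylor expansion accumulated over clustered sources, which is what yields the O(N) cost and the error bound mentioned in the abstract.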

Cite

Text

Yang et al. "Efficient Kernel Machines Using the Improved Fast Gauss Transform." Neural Information Processing Systems, 2004.

Markdown

[Yang et al. "Efficient Kernel Machines Using the Improved Fast Gauss Transform." Neural Information Processing Systems, 2004.](https://mlanthology.org/neurips/2004/yang2004neurips-efficient/)

BibTeX

@inproceedings{yang2004neurips-efficient,
  title     = {{Efficient Kernel Machines Using the Improved Fast Gauss Transform}},
  author    = {Yang, Changjiang and Duraiswami, Ramani and Davis, Larry S.},
  booktitle = {Neural Information Processing Systems},
  year      = {2004},
  pages     = {1561-1568},
  url       = {https://mlanthology.org/neurips/2004/yang2004neurips-efficient/}
}