Kernel Approximation via Empirical Orthogonal Decomposition for Unsupervised Feature Learning

Abstract

Kernel approximation methods are important tools for various machine learning problems. There are two major approaches to approximating the kernel function: the Nyström method and the random features method. However, the Nyström method requires relatively high-complexity post-processing to calculate a solution, and the random features method does not provide sufficient generalization performance. In this paper, we propose a method that achieves good generalization performance without high-complexity post-processing via empirical orthogonal decomposition using the probability distribution estimated from training data. We provide a bound for the approximation error of the proposed method. Our experiments show that the proposed method outperforms the random features method and is comparable with the Nyström method in terms of both approximation error and classification accuracy. We also show that hierarchical feature extraction using our kernel approximation outperforms existing methods.
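The two baselines mentioned in the abstract can be illustrated with a small sketch. The snippet below (not the paper's proposed method, just the standard random Fourier features and Nyström approximations of a Gaussian kernel, with assumed toy parameters `n`, `d`, `sigma`, `D`, `m`) compares their relative approximation error on random data:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, sigma = 200, 5, 3.0  # toy data size, dimension, kernel bandwidth (assumptions)
X = rng.standard_normal((n, d))

# Exact Gaussian (RBF) kernel matrix: k(x, y) = exp(-||x - y||^2 / (2 sigma^2))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / (2 * sigma**2))

# --- Random features: random Fourier features for the Gaussian kernel ---
D = 500  # number of random features
W = rng.standard_normal((d, D)) / sigma
b = rng.uniform(0, 2 * np.pi, D)
Z = np.sqrt(2.0 / D) * np.cos(X @ W + b)  # explicit feature map z(x)
K_rff = Z @ Z.T                           # z(x)^T z(y) ~= k(x, y)

# --- Nystrom: low-rank approximation from m landmark points ---
m = 50
idx = rng.choice(n, m, replace=False)
K_nm = K[:, idx]
K_mm = K[np.ix_(idx, idx)]
vals, vecs = np.linalg.eigh(K_mm)
vals = np.maximum(vals, 1e-12)            # guard against tiny negative eigenvalues
F = K_nm @ vecs @ np.diag(vals ** -0.5)   # Nystrom feature map: K_nm K_mm^{-1/2}
K_nys = F @ F.T

err_rff = np.linalg.norm(K - K_rff, "fro") / np.linalg.norm(K, "fro")
err_nys = np.linalg.norm(K - K_nys, "fro") / np.linalg.norm(K, "fro")
print(f"relative Frobenius error  RFF: {err_rff:.3f}  Nystrom: {err_nys:.3f}")
```

On this kind of smooth, low-effective-rank kernel, the data-dependent Nyström approximation typically yields a smaller error than data-independent random features at comparable feature dimension, which is the trade-off (accuracy vs. post-processing cost) that motivates the paper.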

Cite

Text

Mukuta and Harada. "Kernel Approximation via Empirical Orthogonal Decomposition for Unsupervised Feature Learning." Conference on Computer Vision and Pattern Recognition, 2016. doi:10.1109/CVPR.2016.564

Markdown

[Mukuta and Harada. "Kernel Approximation via Empirical Orthogonal Decomposition for Unsupervised Feature Learning." Conference on Computer Vision and Pattern Recognition, 2016.](https://mlanthology.org/cvpr/2016/mukuta2016cvpr-kernel/) doi:10.1109/CVPR.2016.564

BibTeX

@inproceedings{mukuta2016cvpr-kernel,
  title     = {{Kernel Approximation via Empirical Orthogonal Decomposition for Unsupervised Feature Learning}},
  author    = {Mukuta, Yusuke and Harada, Tatsuya},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2016},
  doi       = {10.1109/CVPR.2016.564},
  url       = {https://mlanthology.org/cvpr/2016/mukuta2016cvpr-kernel/}
}