Fourier Kernel Learning

Abstract

Approximations based on random Fourier embeddings have recently emerged as an efficient and formally consistent methodology to design large-scale kernel machines [23]. By expressing the kernel as a Fourier expansion, features are generated from a finite set of random basis projections, sampled from the Fourier transform of the kernel, with inner products that are Monte Carlo approximations of the original non-linear model. Based on the observation that different kernel-induced Fourier sampling distributions correspond to different kernel parameters, we show that a scalable optimization process in the Fourier domain can be used to identify the frequency bands that are useful for prediction on training data. This approach allows us to design a family of linear prediction models in which the hyper-parameters of the kernel and the weights of the feature vectors are learned jointly. Under this methodology, we recover efficient and scalable linear reformulations for both single and multiple kernel learning. Experiments show that our linear models produce fast and accurate predictors for complex datasets such as the Visual Object Challenge 2011 and ImageNet ILSVRC 2011.
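The random Fourier embedding the abstract builds on can be sketched as follows. This is a minimal NumPy illustration of the standard construction of Rahimi and Recht for the Gaussian (RBF) kernel, whose Fourier transform is itself Gaussian; it is not the paper's learned sampling scheme, and the function and parameter names are illustrative:

```python
import numpy as np

def random_fourier_features(X, n_features=500, sigma=1.0, seed=None):
    """Monte Carlo feature map approximating the RBF kernel
    k(x, y) = exp(-||x - y||^2 / (2 * sigma^2)).

    Frequencies W are drawn from the kernel's Fourier transform,
    which for the RBF kernel is N(0, sigma^{-2} I); inner products of
    the resulting features approximate the kernel.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=1.0 / sigma, size=(d, n_features))   # sampled frequencies
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)        # random phases
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Feature inner products converge to the exact kernel as n_features grows:
X = np.random.default_rng(0).normal(size=(5, 3))
Z = random_fourier_features(X, n_features=20000, sigma=1.0, seed=0)
K_approx = Z @ Z.T
K_exact = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / 2.0)
```

In this linear form, a kernel machine reduces to a linear model on `Z`, which is what makes the joint optimization over kernel hyper-parameters and weights in the paper scalable.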

Cite

Text

Bazavan et al. "Fourier Kernel Learning." European Conference on Computer Vision, 2012. doi:10.1007/978-3-642-33709-3_33

Markdown

[Bazavan et al. "Fourier Kernel Learning." European Conference on Computer Vision, 2012.](https://mlanthology.org/eccv/2012/bazavan2012eccv-fourier/) doi:10.1007/978-3-642-33709-3_33

BibTeX

@inproceedings{bazavan2012eccv-fourier,
  title     = {{Fourier Kernel Learning}},
  author    = {Bazavan, Eduard Gabriel and Li, Fuxin and Sminchisescu, Cristian},
  booktitle = {European Conference on Computer Vision},
  year      = {2012},
  pages     = {459-473},
  doi       = {10.1007/978-3-642-33709-3_33},
  url       = {https://mlanthology.org/eccv/2012/bazavan2012eccv-fourier/}
}