Not-so-Random Features
Abstract
We propose a principled method for kernel learning, which relies on a Fourier-analytic characterization of translation-invariant or rotation-invariant kernels. Our method produces a sequence of feature maps, iteratively refining the SVM margin. We provide rigorous guarantees for optimality and generalization, interpreting our algorithm as online equilibrium-finding dynamics in a certain two-player min-max game. Evaluations on synthetic and real-world datasets demonstrate scalability and consistent improvements over related random features-based methods.
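For context on the random-features baseline the abstract compares against, below is a minimal sketch (Python, assuming NumPy and scikit-learn) of the standard random Fourier features construction for a translation-invariant Gaussian kernel. This is background only, not the authors' adaptive kernel-learning algorithm; the function and parameter names are illustrative.

```python
# Background sketch (not the paper's method): standard random Fourier features
# approximating the translation-invariant kernel k(x, y) = exp(-gamma * ||x - y||^2),
# via Bochner's theorem. The paper instead learns the feature distribution
# iteratively to refine the SVM margin; this shows only the fixed random baseline.
import numpy as np
from sklearn.svm import LinearSVC

def random_fourier_features(X, n_features=500, gamma=1.0, seed=None):
    """Map X of shape (n_samples, d) to cos/sin features whose inner products
    approximate the Gaussian kernel with bandwidth parameter gamma."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies drawn from the kernel's Fourier transform, here N(0, 2*gamma*I).
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    Z = X @ W
    # Scaling by 1/sqrt(n_features) makes z(x)·z(y) an unbiased kernel estimate.
    return np.hstack([np.cos(Z), np.sin(Z)]) / np.sqrt(n_features)

# Toy usage: random features followed by a linear SVM.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (np.linalg.norm(X, axis=1) > np.sqrt(5)).astype(int)
clf = LinearSVC(dual=False).fit(random_fourier_features(X, seed=1), y)
```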
Cite

Text
Bullins et al. "Not-so-Random Features." International Conference on Learning Representations, 2018.

Markdown
[Bullins et al. "Not-so-Random Features." International Conference on Learning Representations, 2018.](https://mlanthology.org/iclr/2018/bullins2018iclr-notsorandom/)

BibTeX
@inproceedings{bullins2018iclr-notsorandom,
  title = {{Not-so-Random Features}},
  author = {Bullins, Brian and Zhang, Cyril and Zhang, Yi},
  booktitle = {International Conference on Learning Representations},
  year = {2018},
  url = {https://mlanthology.org/iclr/2018/bullins2018iclr-notsorandom/}
}