But How Does It Work in Theory? Linear SVM with Random Features

Abstract

We prove that, under low noise assumptions, the support vector machine with $N\ll m$ random features (RFSVM) can achieve a learning rate faster than $O(1/\sqrt{m})$ on a training set with $m$ samples when an optimized feature map is used. Our work extends the previous fast-rate analysis of the random features method from the least squares loss to the 0-1 loss. We also show that the reweighted feature selection method, which approximates the optimized feature map, helps improve the performance of RFSVM in experiments on a synthetic data set.
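To make the setup concrete, below is a minimal Python sketch of an RFSVM pipeline: random Fourier features approximating a Gaussian kernel feed a linear SVM, with $N\ll m$ features. The synthetic data, the bandwidth `gamma`, the pool size, and the magnitude-based feature selection at the end are illustrative assumptions; they are not the paper's optimized feature map or its exact reweighting procedure.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical synthetic binary classification data (a stand-in for the
# paper's synthetic data set): labels depend on the norm of the input.
m, d = 2000, 10
X = rng.standard_normal((m, d))
y = (np.linalg.norm(X, axis=1) > np.sqrt(d)).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def random_fourier_features(X, W, b):
    """Map X to N random cosine features approximating a Gaussian kernel."""
    N = W.shape[1]
    return np.sqrt(2.0 / N) * np.cos(X @ W + b)

# Draw N << m random features from the Gaussian spectral distribution.
N, gamma = 100, 1.0
W = np.sqrt(2.0 * gamma) * rng.standard_normal((d, N))
b = rng.uniform(0.0, 2.0 * np.pi, size=N)

# RFSVM: a linear SVM on the random feature map.
clf = LinearSVC(C=1.0).fit(random_fourier_features(X_train, W, b), y_train)
print("RFSVM test accuracy:",
      clf.score(random_fourier_features(X_test, W, b), y_test))

# A crude stand-in for reweighted feature selection (an assumption, not the
# paper's exact procedure): train on a larger feature pool, keep the N
# features with the largest learned weights, and retrain on those.
N_pool = 500
W_pool = np.sqrt(2.0 * gamma) * rng.standard_normal((d, N_pool))
b_pool = rng.uniform(0.0, 2.0 * np.pi, size=N_pool)
pool_clf = LinearSVC(C=1.0).fit(
    random_fourier_features(X_train, W_pool, b_pool), y_train)
top = np.argsort(-np.abs(pool_clf.coef_.ravel()))[:N]
sel_clf = LinearSVC(C=1.0).fit(
    random_fourier_features(X_train, W_pool[:, top], b_pool[top]), y_train)
print("Selected-feature test accuracy:",
      sel_clf.score(random_fourier_features(X_test, W_pool[:, top], b_pool[top]), y_test))
```

The key point the sketch illustrates is that the downstream classifier stays linear; only the feature distribution changes, which is why reweighting or selecting features can approximate the optimized feature map without changing the training algorithm.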

Cite

Text

Sun et al. "But How Does It Work in Theory? Linear SVM with Random Features." Neural Information Processing Systems, 2018.

Markdown

[Sun et al. "But How Does It Work in Theory? Linear SVM with Random Features." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/sun2018neurips-work/)

BibTeX

@inproceedings{sun2018neurips-work,
  title     = {{But How Does It Work in Theory? Linear SVM with Random Features}},
  author    = {Sun, Yitong and Gilbert, Anna and Tewari, Ambuj},
  booktitle = {Neural Information Processing Systems},
  year      = {2018},
  pages     = {3379--3388},
  url       = {https://mlanthology.org/neurips/2018/sun2018neurips-work/}
}