A Randomized Algorithm for Large Scale Support Vector Learning

Abstract

We propose a randomized algorithm for large scale SVM learning which solves the problem by iterating over random subsets of the data. Crucial to the scalability of the algorithm is the size of the subsets chosen. In the context of text classification we show that, by using ideas from random projections, a sample size of O(log n) can be used to obtain a solution which is close to the optimal with high probability. Experiments on synthetic and real-life data sets demonstrate that the algorithm scales up SVM learners without loss of accuracy.
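The abstract describes the core idea at a high level: repeatedly train on small random subsets of the data (of size roughly O(log n)) while carrying useful points forward between rounds. Below is a minimal illustrative sketch of that general scheme, not the paper's exact procedure; the subset-size constant, the iteration count, and the rule of retaining only the current support vectors between rounds are assumptions made for illustration, and scikit-learn's `SVC` stands in for whatever base SVM solver is used.

```python
import numpy as np
from sklearn.svm import SVC


def randomized_svm(X, y, C=1.0, n_iters=20, seed=None):
    """Illustrative sketch: train an SVM by iterating over small random
    subsets of the data, keeping the support vectors found so far.

    The O(log n) subset size follows the abstract; the constant factor
    of 10 and the 20-iteration default are hypothetical choices.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    subset_size = max(10, 10 * int(np.ceil(np.log(n))))  # ~O(log n) points per round
    keep = np.array([], dtype=int)  # indices of the current working set (support vectors)
    clf = None
    for _ in range(n_iters):
        # Draw a fresh random subset and merge it with the retained support vectors.
        sample = rng.choice(n, size=min(subset_size, n), replace=False)
        idx = np.unique(np.concatenate([keep, sample]))
        if np.unique(y[idx]).size < 2:
            continue  # skip degenerate subsets containing a single class
        clf = SVC(kernel="linear", C=C)
        clf.fit(X[idx], y[idx])
        keep = idx[clf.support_]  # retain only the support vectors for the next round
    return clf
```

Each round solves a small SVM problem whose size is independent of n (up to the retained support vectors), which is what makes the overall scheme attractive for large data sets.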

Cite

Text

Kumar et al. "A Randomized Algorithm for Large Scale Support Vector Learning." Neural Information Processing Systems, 2007.

Markdown

[Kumar et al. "A Randomized Algorithm for Large Scale Support Vector Learning." Neural Information Processing Systems, 2007.](https://mlanthology.org/neurips/2007/kumar2007neurips-randomized/)

BibTeX

@inproceedings{kumar2007neurips-randomized,
  title     = {{A Randomized Algorithm for Large Scale Support Vector Learning}},
  author    = {Kumar, Krishnan and Bhattacharya, Chiru and Hariharan, Ramesh},
  booktitle = {Neural Information Processing Systems},
  year      = {2007},
  pages     = {793--800},
  url       = {https://mlanthology.org/neurips/2007/kumar2007neurips-randomized/}
}