Random Projections for Support Vector Machines
Abstract
Let X be a data matrix of rank ρ, representing n points in d-dimensional space. The linear support vector machine constructs a hyperplane separator that maximizes the 1-norm soft margin. We develop a new oblivious dimension reduction technique which is precomputed and can be applied to any input matrix X. We prove that, with high probability, the margin and minimum enclosing ball in the feature space are preserved to within ε-relative error, ensuring generalization comparable to that in the original space. We present extensive experiments with real and synthetic data to support our theory.
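As a concrete illustration of the setting, the following minimal Python sketch applies one standard oblivious projection, a scaled Gaussian sketching matrix R drawn independently of the data, and then trains a 1-norm soft-margin linear SVM in the reduced space. The Gaussian construction, the dimensions, and the use of scikit-learn's LinearSVC are illustrative assumptions here, not the paper's specific projection.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# n points in d dimensions, projected down to r << d dimensions.
n, d, r = 500, 2000, 256
X, y = make_classification(n_samples=n, n_features=d, n_informative=50,
                           random_state=0)

# Oblivious projection: R is drawn without looking at X, so the same R
# can be precomputed and applied to any input matrix with d columns.
R = rng.standard_normal((d, r)) / np.sqrt(r)
X_proj = X @ R  # the n x r sketched data

# 1-norm soft-margin (hinge-loss) linear SVM in the projected space.
clf = LinearSVC(C=1.0, loss="hinge", max_iter=10000).fit(X_proj, y)
print("training accuracy in the projected space:", clf.score(X_proj, y))

Under the paper's guarantee, the margin in the projected space is preserved to within ε-relative error with high probability, which is what makes training on the r-dimensional sketch meaningful.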
Cite
Text
Paul et al. "Random Projections for Support Vector Machines." International Conference on Artificial Intelligence and Statistics, 2013.Markdown
[Paul et al. "Random Projections for Support Vector Machines." International Conference on Artificial Intelligence and Statistics, 2013.](https://mlanthology.org/aistats/2013/paul2013aistats-random/)BibTeX
@inproceedings{paul2013aistats-random,
title = {{Random Projections for Support Vector Machines}},
author = {Paul, Saurabh and Boutsidis, Christos and Magdon-Ismail, Malik and Drineas, Petros},
booktitle = {International Conference on Artificial Intelligence and Statistics},
year = {2013},
pages = {498--506},
url = {https://mlanthology.org/aistats/2013/paul2013aistats-random/}
}