Learning Optimally Sparse Support Vector Machines
Abstract
We show how to train SVMs with an optimal guarantee on the number of support vectors (up to constants), and with sample complexity and training runtime bounds matching the best known for kernel SVM optimization (i.e., without any additional asymptotic cost beyond standard SVM training). Our method is simple to implement and works well in practice.
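The abstract says the method is simple to implement but does not spell it out here. Purely as an illustration of the general idea of sparsifying a kernel SVM (train a standard SVM, then greedily re-express its decision function with a small budget of support vectors), here is a minimal sketch. The `sparsify` helper, the greedy matching-pursuit selection rule, the RBF kernel, and the `budget` parameter are all assumptions for illustration, not the paper's actual algorithm or its optimality guarantee.

```python
# Illustrative sketch only: post-hoc sparsification of a trained kernel
# SVM via greedy forward selection (matching pursuit in the RKHS).
# This is NOT the algorithm from Cotter et al. (2013).
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel

def sparsify(K, alpha, budget):
    """Greedily pick up to `budget` support vectors whose kernel
    expansion best approximates sum_i alpha_i k(x_i, .) in RKHS norm.

    K:      kernel matrix over the original support vectors
    alpha:  their (signed) dual coefficients
    Returns (selected indices, refitted coefficients beta)."""
    n = len(alpha)
    target = K @ alpha                     # <f, k(x_i, .)> for every i
    selected, remaining = [], list(range(n))
    for _ in range(min(budget, n)):
        best, best_gain, best_b = None, -np.inf, None
        for j in remaining:
            S = selected + [j]
            Kss = K[np.ix_(S, S)]
            # Optimal least-squares refit on the candidate subset S.
            b = np.linalg.solve(Kss + 1e-10 * np.eye(len(S)), target[S])
            gain = target[S] @ b           # RKHS energy explained by S
            if gain > best_gain:
                best, best_gain, best_b = j, gain, b
        selected.append(best)
        remaining.remove(best)
        beta = best_b                      # refit for the winning subset
    return np.array(selected), beta

# Usage: train a dense SVM, then keep only 20 of its support vectors.
X, y = make_classification(n_samples=400, random_state=0)
svm = SVC(kernel="rbf", gamma=0.1).fit(X, y)
sv, alpha = svm.support_vectors_, svm.dual_coef_.ravel()

idx, beta = sparsify(rbf_kernel(sv, sv, gamma=0.1), alpha, budget=20)
scores = rbf_kernel(X, sv[idx], gamma=0.1) @ beta + svm.intercept_
print("sparse-model accuracy:", np.mean((scores > 0) == (y == 1)))
```

Measuring the approximation in the RKHS norm is a natural choice here: for a bounded kernel it uniformly bounds the change in decision values, since |f(x) - g(x)| <= ||f - g||_H * sqrt(k(x, x)), so a good sparse fit cannot silently flip many predictions.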
Cite
Text
Cotter et al. "Learning Optimally Sparse Support Vector Machines." International Conference on Machine Learning, 2013.
Markdown
[Cotter et al. "Learning Optimally Sparse Support Vector Machines." International Conference on Machine Learning, 2013.](https://mlanthology.org/icml/2013/cotter2013icml-learning/)
BibTeX
@inproceedings{cotter2013icml-learning,
  title     = {{Learning Optimally Sparse Support Vector Machines}},
  author    = {Cotter, Andrew and Shalev-Shwartz, Shai and Srebro, Nati},
  booktitle = {International Conference on Machine Learning},
  year      = {2013},
  pages     = {266--274},
  volume    = {28},
  url       = {https://mlanthology.org/icml/2013/cotter2013icml-learning/}
}