Sparse Kernel SVMs via Cutting-Plane Training

Abstract

While Support Vector Machines (SVMs) with kernels offer great flexibility and prediction performance on many application problems, their practical use is often hindered by the following two problems. Both problems can be traced back to the number of Support Vectors (SVs), which is known to generally grow linearly with the data set size [1]. First, training is slower than other methods and linear SVMs, where recent advances in training algorithms vastly improved training time. Second, since the classification rule $h(x)={\rm sign} \left[\sum^{\#SV}_{i=1} \alpha_iK(x_i, x)\right]$ requires evaluating the kernel for every SV, it is too expensive to evaluate in many applications when the number of SVs is large.
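The prediction-cost problem the abstract describes can be seen directly in the classification rule: evaluating $h(x)$ requires one kernel computation per support vector. The sketch below illustrates this with a toy RBF-kernel predictor; the support vectors, coefficients, and `gamma` value are hypothetical, chosen only to make the linear-in-#SV cost concrete.

```python
import math

def rbf_kernel(x, z, gamma=0.5):
    """RBF kernel K(x, z) = exp(-gamma * ||x - z||^2)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def predict(x, support_vectors, alphas, kernel=rbf_kernel):
    """h(x) = sign(sum_i alpha_i * K(x_i, x)).

    Each prediction loops over all support vectors, so its cost
    grows linearly with #SV -- the bottleneck the paper targets.
    """
    score = sum(a * kernel(sv, x) for a, sv in zip(alphas, support_vectors))
    return 1 if score >= 0 else -1

# Toy model: two support vectors with signed coefficients alpha_i.
svs = [(1.0, 1.0), (-1.0, -1.0)]
alphas = [0.8, -0.8]

print(predict((0.9, 1.1), svs, alphas))    # point near the positive SV
print(predict((-1.2, -0.8), svs, alphas))  # point near the negative SV
```

Since the loop runs once per SV, a model with tens of thousands of SVs pays that cost on every single prediction, which motivates training sparser kernel expansions.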

Cite

Text

Joachims and Yu. "Sparse Kernel SVMs via Cutting-Plane Training." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2009. doi:10.1007/978-3-642-04180-8_8

Markdown

[Joachims and Yu. "Sparse Kernel SVMs via Cutting-Plane Training." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2009.](https://mlanthology.org/ecmlpkdd/2009/joachims2009ecmlpkdd-sparse/) doi:10.1007/978-3-642-04180-8_8

BibTeX

@inproceedings{joachims2009ecmlpkdd-sparse,
  title     = {{Sparse Kernel SVMs via Cutting-Plane Training}},
  author    = {Joachims, Thorsten and Yu, Chun-Nam John},
  booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
  year      = {2009},
  pages     = {8},
  doi       = {10.1007/978-3-642-04180-8_8},
  url       = {https://mlanthology.org/ecmlpkdd/2009/joachims2009ecmlpkdd-sparse/}
}