Breaking SVM Complexity with Cross-Training

Abstract

We propose to selectively remove examples from the training set using probabilistic estimates related to editing algorithms (Devijver and Kittler, 1982). This heuristic procedure aims at creating a separable distribution of training examples with minimal impact on the position of the decision boundary. It breaks the linear dependency between the number of support vectors (SVs) and the number of training examples, and sharply reduces the complexity of SVMs during both the training and prediction stages.
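As a rough illustration of the editing idea described above (not the paper's exact cross-training procedure), one can use cross-validated probability estimates to drop training points whose own label is confidently contradicted, then train the SVM on the remaining, more separable set. The sketch below assumes scikit-learn; the probability model, the 0.2 threshold, and the synthetic data are illustrative choices, not from the paper.

```python
# Sketch: editing-style removal of likely-noisy examples before SVM training.
# Illustrates the general heuristic only; the 0.2 cutoff and the logistic
# probability model are assumptions for this example.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC

# Synthetic binary problem with 10% label noise to make editing visible.
X, y = make_classification(n_samples=2000, n_features=20,
                           flip_y=0.1, random_state=0)

# Out-of-fold probability estimates play the role of the editing statistic.
proba = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                          cv=5, method="predict_proba")

# Keep an example unless the estimated probability of its own label is tiny.
keep = proba[np.arange(len(y)), y] > 0.2

svm_full = SVC(kernel="rbf").fit(X, y)
svm_edited = SVC(kernel="rbf").fit(X[keep], y[keep])
print("removed:", int((~keep).sum()),
      "SVs full:", len(svm_full.support_),
      "SVs edited:", len(svm_edited.support_))
```

Because the removed points are precisely those that would sit on the wrong side of the margin, the edited problem is closer to separable, which is what drives down the support vector count.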

Cite

Text

Bottou et al. "Breaking SVM Complexity with Cross-Training." Neural Information Processing Systems, 2004.

Markdown

[Bottou et al. "Breaking SVM Complexity with Cross-Training." Neural Information Processing Systems, 2004.](https://mlanthology.org/neurips/2004/bottou2004neurips-breaking/)

BibTeX

@inproceedings{bottou2004neurips-breaking,
  title     = {{Breaking SVM Complexity with Cross-Training}},
  author    = {Bottou, Léon and Weston, Jason and Bakir, Gökhan H.},
  booktitle = {Neural Information Processing Systems},
  year      = {2004},
  pages     = {81--88},
  url       = {https://mlanthology.org/neurips/2004/bottou2004neurips-breaking/}
}