Training Nu-Support Vector Classifiers: Theory and Algorithms
Abstract
The ν-support vector machine (ν-SVM) for classification proposed by Schölkopf, Smola, Williamson, and Bartlett (2000) has the advantage of using a parameter ν to control the number of support vectors. In this article, we investigate the relation between ν-SVM and C-SVM in detail. We show that in general they are two different problems with the same optimal solution set. Hence, we may expect that many numerical aspects of solving them are similar. However, compared to regular C-SVM, the formulation of ν-SVM is more complicated, so up to now there have been no effective methods for solving large-scale ν-SVM. We propose a decomposition method for ν-SVM that is competitive with existing methods for C-SVM. We also discuss the behavior of ν-SVM through numerical experiments.
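The key property of ν-SVM highlighted in the abstract is that ν acts as a lower bound on the fraction of support vectors (and an upper bound on the fraction of margin errors). A minimal sketch of this behavior, assuming scikit-learn's `NuSVC` (whose solver is based on the authors' LIBSVM decomposition method) and a synthetic dataset:

```python
# Sketch: the nu parameter lower-bounds the fraction of support vectors.
# Assumes scikit-learn is available; dataset and parameter values are illustrative.
from sklearn.datasets import make_classification
from sklearn.svm import NuSVC

# A small two-class toy problem (not from the paper).
X, y = make_classification(
    n_samples=200, n_features=2, n_informative=2,
    n_redundant=0, random_state=0,
)

# For each nu, the trained model should use at least nu * n_samples
# support vectors, per the theory of Schölkopf et al. (2000).
fracs = {}
for nu in (0.1, 0.3, 0.5):
    clf = NuSVC(nu=nu, kernel="rbf").fit(X, y)
    fracs[nu] = clf.support_.size / len(X)
    print(f"nu = {nu}: support-vector fraction = {fracs[nu]:.2f}")
```

Larger values of ν therefore force the solver to retain more support vectors, which is the practical sense in which ν "controls" model sparsity.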
Cite
Text
Chang and Lin. "Training Nu-Support Vector Classifiers: Theory and Algorithms." Neural Computation, 2001. doi:10.1162/089976601750399335

Markdown

[Chang and Lin. "Training Nu-Support Vector Classifiers: Theory and Algorithms." Neural Computation, 2001.](https://mlanthology.org/neco/2001/chang2001neco-training/) doi:10.1162/089976601750399335

BibTeX
@article{chang2001neco-training,
title = {{Training Nu-Support Vector Classifiers: Theory and Algorithms}},
author = {Chang, Chih-Chung and Lin, Chih-Jen},
journal = {Neural Computation},
year = {2001},
pages = {2119--2147},
doi = {10.1162/089976601750399335},
volume = {13},
url = {https://mlanthology.org/neco/2001/chang2001neco-training/}
}