Learning with Rigorous Support Vector Machines
Abstract
We examine the so-called rigorous support vector machine (RSVM) approach proposed by Vapnik (1998). The RSVM formulation is derived by explicitly implementing the structural risk minimization principle, with a parameter H used to directly control the VC dimension of the set of separating hyperplanes. By optimizing the dual problem, RSVM finds the optimal separating hyperplane from a set of functions with VC dimension approximately H^2 + 1. RSVM produces classifiers equivalent to those obtained by classic SVMs for appropriate parameter choices, but the parameter H facilitates model selection, allowing VC bounds on the generalization risk to be minimized more effectively. In our empirical studies, good models are achieved for an appropriate H^2 ∈ [5% ℓ, 30% ℓ], where ℓ is the size of the training data.
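As a rough illustration of the idea the abstract describes, the sketch below solves a plausible RSVM-style primal: minimize the total slack subject to the usual margin constraints and a hard norm cap ||w||^2 ≤ H^2, so that H directly bounds the capacity of the hypothesis set. The toy data, the value of H, and the use of a generic SLSQP solver are assumptions for illustration, not the paper's own dual-based algorithm.

```python
# Hedged sketch of an RSVM-style primal (illustrative, not the paper's solver):
#   minimize   sum_i xi_i
#   subject to y_i (w . x_i + b) >= 1 - xi_i,   xi_i >= 0,   ||w||^2 <= H^2
import numpy as np
from scipy.optimize import minimize

# Tiny linearly separable toy set (an assumption, not data from the paper).
X = np.array([[2.0, 2.0], [1.5, 2.5], [-2.0, -1.0], [-1.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
n, d = X.shape
H = 2.0  # caps ||w||, and thereby the VC dimension of the hyperplane set

def unpack(z):
    # decision variables packed as [w (d), b (1), xi (n)]
    return z[:d], z[d], z[d + 1:]

def objective(z):
    _, _, xi = unpack(z)
    return xi.sum()  # total slack

cons = [
    # margin constraints: y_i (w . x_i + b) - 1 + xi_i >= 0
    {"type": "ineq",
     "fun": lambda z: y * (X @ unpack(z)[0] + unpack(z)[1]) - 1 + unpack(z)[2]},
    # slack nonnegativity: xi_i >= 0
    {"type": "ineq", "fun": lambda z: unpack(z)[2]},
    # hard capacity cap: H^2 - ||w||^2 >= 0
    {"type": "ineq", "fun": lambda z: H**2 - unpack(z)[0] @ unpack(z)[0]},
]

res = minimize(objective, np.zeros(d + 1 + n), method="SLSQP", constraints=cons)
w, b, xi = unpack(res.x)
```

Because the toy data are separable within the cap, the optimal total slack is (numerically) zero; shrinking H far enough would force positive slack, which is how H trades capacity against training error.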
Cite
Text
Bi and Vapnik. "Learning with Rigorous Support Vector Machines." Annual Conference on Computational Learning Theory, 2003. doi:10.1007/978-3-540-45167-9_19
Markdown
[Bi and Vapnik. "Learning with Rigorous Support Vector Machines." Annual Conference on Computational Learning Theory, 2003.](https://mlanthology.org/colt/2003/bi2003colt-learning/) doi:10.1007/978-3-540-45167-9_19
BibTeX
@inproceedings{bi2003colt-learning,
title = {{Learning with Rigorous Support Vector Machines}},
author = {Bi, Jinbo and Vapnik, Vladimir},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2003},
pages = {243-257},
doi = {10.1007/978-3-540-45167-9_19},
url = {https://mlanthology.org/colt/2003/bi2003colt-learning/}
}