Exact Simplification of Support Vector Solutions (Kernel Machines Section)
Abstract
This paper demonstrates that standard algorithms for training support vector machines generally produce solutions with a greater number of support vectors than are strictly necessary. An algorithm is presented that allows unnecessary support vectors to be recognized and eliminated while leaving the solution otherwise unchanged. The algorithm is applied to a variety of benchmark data sets (for both classification and regression) and in most cases the procedure leads to a reduction in the number of support vectors. In some cases the reduction is substantial.
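The simplification described here rests on the fact that if a support vector's image in feature space is a linear combination of the other support vectors' images, it can be removed and its dual coefficient folded into the others without changing the decision function. The sketch below illustrates that idea under stated assumptions: it writes the decision function as f(x) = Σᵢ βᵢ K(xᵢ, x) + b with βᵢ = αᵢyᵢ, and detects dependence via a least-squares residual on the Gram matrix rather than the exact row-reduction procedure of the paper; the function name `simplify_svm` and the tolerance `tol` are illustrative, not the authors' API.

```python
import numpy as np

def simplify_svm(X_sv, beta, kernel, tol=1e-10):
    """Remove support vectors whose feature-space images are linearly
    dependent on the others, redistributing their coefficients so the
    decision function f(x) = sum_i beta_i * K(x_i, x) is unchanged.

    beta[i] = alpha_i * y_i (signed dual coefficients).
    Illustrative sketch: dependence is tested by a least-squares
    residual on the Gram matrix, not the paper's exact row reduction.
    """
    X_sv, beta = X_sv.copy(), beta.copy()
    removed = True
    while removed and len(beta) > 1:
        removed = False
        G = kernel(X_sv, X_sv)  # Gram matrix over current support vectors
        for k in range(len(beta)):
            idx = [j for j in range(len(beta)) if j != k]
            # Best expansion of phi(x_k) in the remaining images:
            # solve G[idx,idx] c = G[idx,k] in the least-squares sense.
            c, *_ = np.linalg.lstsq(G[np.ix_(idx, idx)], G[idx, k], rcond=None)
            # Squared feature-space residual ||phi(x_k) - sum_j c_j phi(x_j)||^2
            resid = G[k, k] - 2 * c @ G[idx, k] + c @ G[np.ix_(idx, idx)] @ c
            if resid < tol:
                # Exactly dependent: drop x_k, fold beta_k into the others.
                beta = beta[idx] + c * beta[k]
                X_sv = X_sv[idx]
                removed = True
                break
    return X_sv, beta
```

For example, with a linear kernel `lambda A, B: A @ B.T` and support vectors (1,0), (0,1), (1,1), the third image is the sum of the first two, so one vector can be eliminated while f(x) is reproduced exactly by the remaining coefficients.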
Cite

Downs et al. "Exact Simplification of Support Vector Solutions (Kernel Machines Section)." Journal of Machine Learning Research, 2001.

BibTeX:
@article{downs2001jmlr-exact,
title = {{Exact Simplification of Support Vector Solutions (Kernel Machines Section)}},
author = {Downs, Tom and Gates, Kevin E. and Masters, Annette},
journal = {Journal of Machine Learning Research},
year = {2001},
pages = {293-297},
volume = {2},
url = {https://mlanthology.org/jmlr/2001/downs2001jmlr-exact/}
}