Support Vector Machines, Data Reduction, and Approximate Kernel Matrices
Abstract
The computational and communication costs of processing large-scale data sets with support vector machines (SVMs), in settings such as distributed networking systems, are often prohibitively high. Practitioners therefore apply the SVM algorithm to approximate versions of the kernel matrix obtained through some degree of data reduction. In this paper, we study the tradeoff between data reduction and the resulting loss in classification performance. We introduce and analyze a consistent estimator of the SVM's achieved classification error, and then derive approximate upper bounds on the perturbation of this estimator. The bounds are shown to be empirically tight across a wide range of domains, making it practical to determine the amount of data reduction allowed for a permissible loss in classification performance.
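For intuition, the following is a minimal sketch of the tradeoff the abstract describes, not the paper's estimator or its perturbation bounds: it trains an SVM on an exact RBF kernel and on Nystroem-style low-rank kernel approximations (one common form of data reduction), then compares held-out classification error. The synthetic data set, kernel parameter, and ranks are illustrative assumptions.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.kernel_approximation import Nystroem
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Baseline: SVM with the exact (full-rank) RBF kernel matrix.
exact = SVC(kernel="rbf", gamma=0.1).fit(X_tr, y_tr)

# Data reduction: Nystroem maps the data into a rank-r feature space, so the
# implied kernel matrix has rank at most r; smaller r means more reduction.
errors = {}
for r in (10, 50, 200):
    feat = Nystroem(gamma=0.1, n_components=r, random_state=0)
    Z_tr = feat.fit_transform(X_tr)
    Z_te = feat.transform(X_te)
    approx = SVC(kernel="linear").fit(Z_tr, y_tr)
    errors[r] = 1.0 - approx.score(Z_te, y_te)

print("exact-kernel error:", 1.0 - exact.score(X_te, y_te))
for r, err in errors.items():
    print(f"rank-{r} approximation error: {err:.3f}")

Increasing the rank r typically shrinks the gap to the exact-kernel error; predicting this reduction-versus-accuracy curve without retraining is what the paper's bounds are intended to enable.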
Cite

Nguyen, XuanLong, Ling Huang, and Anthony D. Joseph. "Support Vector Machines, Data Reduction, and Approximate Kernel Matrices." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2008, pp. 137-153. doi:10.1007/978-3-540-87481-2_10. https://mlanthology.org/ecmlpkdd/2008/nguyen2008ecmlpkdd-support/

BibTeX
@inproceedings{nguyen2008ecmlpkdd-support,
title = {{Support Vector Machines, Data Reduction, and Approximate Kernel Matrices}},
author = {Nguyen, XuanLong and Huang, Ling and Joseph, Anthony D.},
booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
year = {2008},
pages = {137-153},
doi = {10.1007/978-3-540-87481-2_10},
url = {https://mlanthology.org/ecmlpkdd/2008/nguyen2008ecmlpkdd-support/}
}