Using Analytic QP and Sparseness to Speed Training of Support Vector Machines
Abstract
Training a Support Vector Machine (SVM) requires the solution of a very large quadratic programming (QP) problem. This paper proposes an algorithm for training SVMs: Sequential Minimal Optimization, or SMO. SMO breaks the large QP problem into a series of smallest possible QP problems which are analytically solvable. Thus, SMO does not require a numerical QP library. SMO's computation time is dominated by evaluation of the kernel, hence kernel optimizations substantially quicken SMO. For the MNIST database, SMO is 1.7 times as fast as PCG chunking; while for the UCI Adult database and linear SVMs, SMO can be 1500 times faster than the PCG chunking algorithm.
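The "smallest possible QP problems" in the abstract are optimizations over just two Lagrange multipliers at a time, which have a closed-form solution. The sketch below (an assumption-laden illustration, not Platt's full implementation with its working-set heuristics, bias update, and error cache) shows the core analytic pair update: compute the unconstrained optimum for one multiplier, clip it to the box constraints, and adjust the other multiplier so the equality constraint is preserved.

```python
import numpy as np

def smo_pair_step(alpha, i, j, K, y, C):
    """One analytic SMO step: optimize the SVM dual over the pair
    (alpha[i], alpha[j]) with all other multipliers held fixed.
    K is the precomputed kernel matrix, y holds labels in {-1, +1},
    C is the box constraint. Returns the updated multiplier vector."""
    if i == j:
        return alpha
    # Prediction errors without the bias term; the bias cancels in E_i - E_j,
    # which is all the update uses.
    f = (alpha * y) @ K
    E_i, E_j = f[i] - y[i], f[j] - y[j]
    # Feasible segment for alpha[j]: the box [0, C] intersected with the line
    # y_i*alpha_i + y_j*alpha_j = const imposed by the equality constraint.
    if y[i] != y[j]:
        L = max(0.0, alpha[j] - alpha[i])
        H = min(C, C + alpha[j] - alpha[i])
    else:
        L = max(0.0, alpha[i] + alpha[j] - C)
        H = min(C, alpha[i] + alpha[j])
    # Second derivative of the dual objective along the constraint line.
    eta = K[i, i] + K[j, j] - 2.0 * K[i, j]
    if eta <= 0.0 or L >= H:
        return alpha  # this sketch simply skips degenerate pairs
    # Unconstrained optimum for alpha[j], clipped to the feasible segment.
    a_j = float(np.clip(alpha[j] + y[j] * (E_i - E_j) / eta, L, H))
    # Move alpha[i] the opposite way so y_i*a_i + y_j*a_j is unchanged.
    a_i = alpha[i] + y[i] * y[j] * (alpha[j] - a_j)
    out = alpha.copy()
    out[i], out[j] = a_i, out[j] * 0 + a_j
    return out
```

Because each step maximizes the (concave) dual restricted to one pair, the dual objective is monotonically non-decreasing, which is why no numerical QP library is needed; in the linear-SVM case the kernel evaluations that dominate the cost reduce to dot products, matching the abstract's note that kernel optimizations drive the speedups.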
Cite

Text

Platt. "Using Analytic QP and Sparseness to Speed Training of Support Vector Machines." Neural Information Processing Systems, 1998.

Markdown

[Platt. "Using Analytic QP and Sparseness to Speed Training of Support Vector Machines." Neural Information Processing Systems, 1998.](https://mlanthology.org/neurips/1998/platt1998neurips-using/)

BibTeX
@inproceedings{platt1998neurips-using,
title = {{Using Analytic QP and Sparseness to Speed Training of Support Vector Machines}},
author = {Platt, John C.},
booktitle = {Neural Information Processing Systems},
year = {1998},
pages = {557-563},
url = {https://mlanthology.org/neurips/1998/platt1998neurips-using/}
}