Training Support Vector Machines via SMO-Type Decomposition Methods

Abstract

This article presents a comprehensive study of SMO-type (Sequential Minimal Optimization) decomposition methods for training support vector machines. We propose a general and flexible selection of the two-element working set. The main theoretical results include 1) a simple asymptotic convergence proof, 2) a useful explanation of the shrinking and caching techniques, and 3) the linear convergence of the method. The analysis applies to any SMO-type implementation whose working set selection falls into the proposed framework.
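To make the setting concrete, the sketch below shows one common instance of an SMO-type decomposition method: solving the SVM dual with a linear kernel, selecting the two-element working set as the "maximal violating pair" of the KKT conditions. This is an illustrative sketch, not the paper's specific selection framework; the function name `smo_train` and the toy data are our own.

```python
import numpy as np

def smo_train(X, y, C=1.0, eps=1e-3, max_iter=10000):
    """Minimal SMO-type decomposition for the SVM dual (linear kernel).

    Solves  min 1/2 a'Qa - e'a  s.t.  y'a = 0,  0 <= a <= C,
    where Q_ij = y_i y_j x_i'x_j, using a maximal-violating-pair
    two-element working set. Illustrative sketch only.
    """
    n = len(y)
    Q = (y[:, None] * y[None, :]) * (X @ X.T)
    alpha = np.zeros(n)
    G = -np.ones(n)                      # gradient of the dual: Q a - e

    for _ in range(max_iter):
        # Index sets whose variables can still move up/down the constraint.
        up = ((alpha < C) & (y > 0)) | ((alpha > 0) & (y < 0))
        low = ((alpha < C) & (y < 0)) | ((alpha > 0) & (y > 0))
        vals = -y * G
        iu, il = np.flatnonzero(up), np.flatnonzero(low)
        i = iu[np.argmax(vals[iu])]      # most violating "up" index
        j = il[np.argmin(vals[il])]      # most violating "low" index
        if vals[i] - vals[j] <= eps:     # KKT violation below tolerance
            break
        # Analytic two-variable subproblem along the feasible direction
        # d_i = y_i, d_j = -y_j (keeps y'a constant).
        a = Q[i, i] + Q[j, j] - 2.0 * y[i] * y[j] * Q[i, j]
        t = (vals[i] - vals[j]) / max(a, 1e-12)   # unconstrained step
        t_max = min(C - alpha[i] if y[i] > 0 else alpha[i],
                    alpha[j] if y[j] > 0 else C - alpha[j])
        t = min(t, t_max)                # clip to the box [0, C]
        alpha[i] += y[i] * t
        alpha[j] -= y[j] * t
        G += t * (y[i] * Q[:, i] - y[j] * Q[:, j])
    return alpha
```

On a small separable problem, the returned multipliers satisfy the equality constraint and box bounds, and the recovered primal hyperplane separates the training data:

```python
X = np.array([[2., 2.], [3., 3.], [-2., -2.], [-3., -3.]])
y = np.array([1., 1., -1., -1.])
alpha = smo_train(X, y, C=10.0)
w = (alpha * y) @ X                      # primal weight vector
```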

Cite

Text

Chen et al. "Training Support Vector Machines via SMO-Type Decomposition Methods." International Conference on Algorithmic Learning Theory, 2005. doi:10.1007/11564089_6

Markdown

[Chen et al. "Training Support Vector Machines via SMO-Type Decomposition Methods." International Conference on Algorithmic Learning Theory, 2005.](https://mlanthology.org/alt/2005/chen2005alt-training/) doi:10.1007/11564089_6

BibTeX

@inproceedings{chen2005alt-training,
  title     = {{Training Support Vector Machines via SMO-Type Decomposition Methods}},
  author    = {Chen, Pai-Hsuen and Fan, Rong-En and Lin, Chih-Jen},
  booktitle = {International Conference on Algorithmic Learning Theory},
  year      = {2005},
  pages     = {45--62},
  doi       = {10.1007/11564089_6},
  url       = {https://mlanthology.org/alt/2005/chen2005alt-training/}
}