The Relaxed Online Maximum Margin Algorithm
Abstract
We describe a new incremental algorithm for training linear threshold functions: the Relaxed Online Maximum Margin Algorithm, or ROMMA. ROMMA can be viewed as an approximation to the algorithm that repeatedly chooses the hyperplane that classifies previously seen examples correctly with the maximum margin. It is known that such a maximum-margin hypothesis can be computed by minimizing the length of the weight vector subject to a number of linear constraints. ROMMA works by maintaining a relatively simple relaxation of these constraints that can be efficiently updated. We prove a mistake bound for ROMMA that is the same as that proved for the perceptron algorithm. Our analysis implies that the more computationally intensive maximum-margin algorithm also satisfies this mistake bound; this is the first worst-case performance guarantee for this algorithm. We describe some experiments using ROMMA and a variant that updates its hypothesis more aggressively as batch algorithms to recognize handwritten digits. The computational complexity and simplicity of these algorithms are similar to those of the perceptron algorithm, but their generalization is much better. We describe a sense in which the performance of ROMMA converges to that of SVM in the limit if bias isn't considered.
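The abstract describes ROMMA as an online learner that, on each update, picks a short weight vector satisfying a relaxed version of the margin constraints from earlier examples. Below is a minimal sketch of such an update loop, assuming the closed-form rule w <- c*w + d*x for the new weight vector; the coefficient formulas and the initialization used here are reconstructed for illustration and should be checked against the paper's derivation rather than taken as the authors' exact algorithm.

import numpy as np

def romma_fit(X, y, aggressive=False):
    """One online pass of a ROMMA-style learner over rows of X with labels y in {-1, +1}."""
    w = None
    for x_t, y_t in zip(X, y):
        if w is None:
            # First update: shortest w with y*(w.x) = 1 (assumed initialization).
            w = (y_t / np.dot(x_t, x_t)) * x_t
            continue
        margin = y_t * np.dot(w, x_t)
        # Standard ROMMA updates only on mistakes; the aggressive variant also
        # updates when the margin constraint y*(w.x) >= 1 is violated.
        threshold = 1.0 if aggressive else 0.0
        if margin < threshold:
            xx = np.dot(x_t, x_t)
            ww = np.dot(w, w)
            wx = np.dot(w, x_t)
            denom = xx * ww - wx ** 2
            if denom <= 1e-12:
                continue  # x_t is (nearly) parallel to w; skip the degenerate update
            # Assumed closed-form solution of: minimize |w'| subject to
            # y*(w'.x_t) >= 1 and w'.w >= |w|^2 (the relaxed constraint set).
            c = (xx * ww - y_t * wx) / denom
            d = (ww * (y_t - wx)) / denom
            w = c * w + d * x_t
    return w

To use the learner as a batch algorithm, as in the experiments the abstract mentions, one would typically cycle this pass over the training set several times and classify new points by the sign of np.dot(w, x).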
Cite
Text
Li and Long. "The Relaxed Online Maximum Margin Algorithm." Neural Information Processing Systems, 1999.
Markdown
[Li and Long. "The Relaxed Online Maximum Margin Algorithm." Neural Information Processing Systems, 1999.](https://mlanthology.org/neurips/1999/li1999neurips-relaxed/)
BibTeX
@inproceedings{li1999neurips-relaxed,
title = {{The Relaxed Online Maximum Margin Algorithm}},
author = {Li, Yi and Long, Philip M.},
booktitle = {Neural Information Processing Systems},
year = {1999},
pages = {498-504},
url = {https://mlanthology.org/neurips/1999/li1999neurips-relaxed/}
}