Approximate Maximum Margin Algorithms with Rules Controlled by the Number of Mistakes

Abstract

We present a family of incremental Perceptron-like algorithms (PLAs) with margin in which both the "effective" learning rate, defined as the ratio of the learning rate to the length of the weight vector, and the misclassification condition are entirely controlled by rules involving (powers of) the number of mistakes. We examine the convergence of such algorithms in a finite number of steps and show that under some rather mild conditions there exists a limit of the parameters involved in which convergence leads to classification with maximum margin. An experimental comparison of algorithms belonging to this family with other large margin PLAs and decomposition SVMs is also presented.
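To make the idea concrete, here is a minimal illustrative sketch of a Perceptron-like algorithm with margin in which both the effective learning rate and the misclassification threshold decay as a power of the mistake count. The specific decay rule `t**-p`, the parameters `eta0`, `beta`, `p`, and the function name are hypothetical choices for illustration, not the authors' exact update rules.

```python
import numpy as np

def mistake_controlled_perceptron(X, y, epochs=100, eta0=1.0, beta=0.1, p=0.5):
    """Perceptron-like algorithm with margin (illustrative sketch).

    The margin threshold and the learning rate both shrink like t**-p,
    where t is the number of mistakes made so far (hypothetical rule).
    X: (n, d) array of inputs; y: (n,) array of labels in {-1, +1}.
    """
    n, d = X.shape
    w = np.zeros(d)
    t = 1  # mistake counter; starts at 1 to avoid division by zero
    for _ in range(epochs):
        made_mistake = False
        for i in range(n):
            norm = np.linalg.norm(w)
            # normalized (geometric) margin of the current example
            margin = y[i] * np.dot(w, X[i]) / norm if norm > 0 else 0.0
            # misclassification condition: margin below a threshold
            # that decays with the number of mistakes
            if margin <= beta / t**p:
                # update with a learning rate that also decays with t
                w += (eta0 / t**p) * y[i] * X[i]
                t += 1
                made_mistake = True
        if not made_mistake:
            break  # all examples cleared the current margin threshold
    return w
```

On linearly separable data this sketch terminates once every example exceeds the (shrinking) margin threshold; how such decay rules relate to convergence to the maximum-margin solution is the subject of the paper's analysis.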

Cite

Text

Tsampouka and Shawe-Taylor. "Approximate Maximum Margin Algorithms with Rules Controlled by the Number of Mistakes." International Conference on Machine Learning, 2007. doi:10.1145/1273496.1273610

Markdown

[Tsampouka and Shawe-Taylor. "Approximate Maximum Margin Algorithms with Rules Controlled by the Number of Mistakes." International Conference on Machine Learning, 2007.](https://mlanthology.org/icml/2007/tsampouka2007icml-approximate/) doi:10.1145/1273496.1273610

BibTeX

@inproceedings{tsampouka2007icml-approximate,
  title     = {{Approximate Maximum Margin Algorithms with Rules Controlled by the Number of Mistakes}},
  author    = {Tsampouka, Petroula and Shawe-Taylor, John},
  booktitle = {International Conference on Machine Learning},
  year      = {2007},
  pages     = {903--910},
  doi       = {10.1145/1273496.1273610},
  url       = {https://mlanthology.org/icml/2007/tsampouka2007icml-approximate/}
}