Constant Rate Approximate Maximum Margin Algorithms
Abstract
We present a new class of Perceptron-like algorithms with margin in which the “effective” learning rate η_eff, defined as the ratio of the learning rate to the length of the weight vector, remains constant. We prove that for η_eff sufficiently small the new algorithms converge in a finite number of steps and show that there exists a limit of the parameters involved in which convergence leads to classification with maximum margin. A soft margin extension for Perceptron-like large margin classifiers is also discussed.
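
As a rough illustration only (not the paper's exact procedure), the Python sketch below keeps the effective rate η_eff = η_t/‖w_t‖ constant by scaling each update step with the current weight-vector length. The margin test y_k(w·x_k) ≤ β‖w_t‖, the parameter names, and the initialization are assumptions made for this sketch.

import numpy as np

def constant_rate_perceptron(X, y, eta_eff=0.05, beta=0.1, max_epochs=1000):
    # Sketch of a Perceptron-like algorithm with margin in which the
    # "effective" learning rate eta_eff = eta_t / ||w_t|| is held constant.
    # The margin condition and update rule here are illustrative assumptions,
    # not the exact rules analyzed in the paper.
    # X: (n, d) array of examples; y: (n,) array of labels in {-1, +1}.
    w = y[0] * X[0].astype(float)            # start from a nonzero weight vector
    for _ in range(max_epochs):
        updated = False
        for x_k, y_k in zip(X, y):
            norm_w = np.linalg.norm(w)
            # Update whenever the normalized margin falls below beta.
            if y_k * np.dot(w, x_k) <= beta * norm_w:
                # Step size eta_t = eta_eff * ||w_t||, so eta_t / ||w_t||
                # stays equal to eta_eff throughout training.
                w += eta_eff * norm_w * y_k * x_k
                updated = True
        if not updated:                       # every example clears the margin
            break
    return w / np.linalg.norm(w)              # unit-length separating direction

In the spirit of the abstract, for sufficiently small eta_eff such a scheme terminates in a finite number of steps on separable data, and in a suitable limit of the parameters the returned direction approaches the maximum-margin classifier.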
Cite

Text

Tsampouka and Shawe-Taylor. "Constant Rate Approximate Maximum Margin Algorithms." European Conference on Machine Learning, 2006. doi:10.1007/11871842_42

Markdown

[Tsampouka and Shawe-Taylor. "Constant Rate Approximate Maximum Margin Algorithms." European Conference on Machine Learning, 2006.](https://mlanthology.org/ecmlpkdd/2006/tsampouka2006ecml-constant/) doi:10.1007/11871842_42

BibTeX
@inproceedings{tsampouka2006ecml-constant,
  title     = {{Constant Rate Approximate Maximum Margin Algorithms}},
  author    = {Tsampouka, Petroula and Shawe-Taylor, John},
  booktitle = {European Conference on Machine Learning},
  year      = {2006},
  pages     = {437--448},
  doi       = {10.1007/11871842_42},
  url       = {https://mlanthology.org/ecmlpkdd/2006/tsampouka2006ecml-constant/}
}