Convergence of Large Margin Separable Linear Classification
Abstract
Large margin linear classification methods have been successfully applied to many applications. For a linearly separable problem, it is known that, under appropriate assumptions, the expected misclassification error of the computed "optimal hyperplane" approaches zero at a rate proportional to the inverse of the training sample size. This rate is usually characterized by the margin and the maximum norm of the input data. In this paper, we argue that another quantity, namely the robustness of the input data distribution, also plays an important role in characterizing the convergence behavior of the expected misclassification error. Based on this concept of robustness, we show that for a large margin separable linear classification problem, the expected misclassification error may converge to zero exponentially fast in the training sample size.
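As a schematic contrast only (not the paper's exact theorem statements), the two rates mentioned in the abstract can be written in the standard notation where R bounds the input norm, gamma is the separation margin, and n is the training sample size; the precise constants and conditions are those given in the paper.

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
% Schematic rate forms only; the paper states the exact constants and
% assumptions. R bounds the input norm, \gamma is the margin, n is the
% training sample size, and err_n is the expected misclassification error.
\begin{align}
  \mathbb{E}\left[\mathrm{err}_n\right]
    &= O\!\left(\frac{R^2}{\gamma^2 n}\right)
    && \text{classical rate, inverse in } n, \\
  \mathbb{E}\left[\mathrm{err}_n\right]
    &= O\!\left(e^{-c n}\right), \quad c > 0,
    && \text{exponential rate under the robustness assumption.}
\end{align}
\end{document}
```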
Cite
Text
Zhang. "Convergence of Large Margin Separable Linear Classification." Neural Information Processing Systems, 2000.Markdown
[Zhang. "Convergence of Large Margin Separable Linear Classification." Neural Information Processing Systems, 2000.](https://mlanthology.org/neurips/2000/zhang2000neurips-convergence/)BibTeX
@inproceedings{zhang2000neurips-convergence,
title = {{Convergence of Large Margin Separable Linear Classification}},
author = {Zhang, Tong},
booktitle = {Neural Information Processing Systems},
year = {2000},
pages = {357-363},
url = {https://mlanthology.org/neurips/2000/zhang2000neurips-convergence/}
}