From Margin to Sparsity
Abstract
We present an improvement of Novikoff's perceptron convergence theorem. Reinterpreting this mistake bound as a margin-dependent sparsity guarantee allows us to give a PAC-style generalisation error bound for the classifier learned by the perceptron learning algorithm. The bound value crucially depends on the margin a support vector machine would achieve on the same data set using the same kernel. Ironically, the bound yields better guarantees than are currently available for the support vector solution itself.
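For context, the classical result the abstract builds on can be stated briefly. Assuming the standard setting (inputs $x_i$ with $\|x_i\| \le R$, labels $y_i \in \{-1,+1\}$, and a unit vector $w^*$ achieving margin $y_i \langle w^*, x_i \rangle \ge \gamma$ for all $i$; this notation is conventional and not defined in this excerpt), Novikoff's theorem bounds the number of perceptron updates:

$$
\text{mistakes} \;\le\; \left(\frac{R}{\gamma}\right)^{2}.
$$

Because the perceptron changes its hypothesis only when it errs, the final classifier $w = \sum_{i \in M} y_i x_i$ (or its kernelised analogue) expands over the mistake set $M$ alone, which is the margin-dependent sparsity guarantee the abstract refers to.

To make that connection concrete, here is a minimal kernel-perceptron sketch. The function names, the RBF kernel, and the toy data are illustrative assumptions, not taken from the paper; the point is only that the learned expansion contains exactly the examples the algorithm erred on.

```python
import numpy as np

def rbf(x, z, gamma=1.0):
    """Gaussian RBF kernel (an illustrative choice; the result is kernel-agnostic)."""
    return np.exp(-gamma * np.sum((x - z) ** 2))

def kernel_perceptron(X, y, kernel=rbf, epochs=100):
    """Kernel perceptron. Returns per-example mistake counts alpha.

    The hypothesis is f(x) = sum_i alpha[i] * y[i] * kernel(X[i], x), so its
    expansion contains only the examples the algorithm erred on: the mistake
    bound doubles as a sparsity guarantee.
    """
    n = len(y)
    alpha = np.zeros(n, dtype=int)
    for _ in range(epochs):
        mistakes = 0
        for i in range(n):
            # Prediction uses only examples with alpha > 0 (the "support set").
            f = sum(alpha[j] * y[j] * kernel(X[j], X[i])
                    for j in np.flatnonzero(alpha))
            if y[i] * f <= 0:      # mistake: add (X[i], y[i]) to the expansion
                alpha[i] += 1
                mistakes += 1
        if mistakes == 0:          # separable in feature space: Novikoff caps
            break                  # the total number of updates at (R/gamma)^2
    return alpha

# Toy usage: two well-separated clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)
alpha = kernel_perceptron(X, y)
print(f"mistakes: {alpha.sum()}, support set size: {(alpha > 0).sum()} of {len(y)}")
```

The support set size reported here is at most the number of mistakes, which in turn is controlled by the margin; a larger margin on the same data yields a sparser classifier.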
Cite
Text
Graepel et al. "From Margin to Sparsity." Neural Information Processing Systems, 2000.

Markdown

[Graepel et al. "From Margin to Sparsity." Neural Information Processing Systems, 2000.](https://mlanthology.org/neurips/2000/graepel2000neurips-margin/)

BibTeX
@inproceedings{graepel2000neurips-margin,
title = {{From Margin to Sparsity}},
author = {Graepel, Thore and Herbrich, Ralf and Williamson, Robert C.},
booktitle = {Neural Information Processing Systems},
year = {2000},
pages = {210--216},
url = {https://mlanthology.org/neurips/2000/graepel2000neurips-margin/}
}