Efficient Learning of Linear Perceptrons

Abstract

We consider the existence of efficient algorithms for learning the class of half-spaces in ℝⁿ in the agnostic learning model (i.e., making no prior assumptions on the example-generating distribution). The resulting combinatorial problem - finding the best agreement half-space over an input sample - is NP-hard to approximate to within some constant factor. We suggest a way to circumvent this theoretical bound by introducing a new measure of success for such algorithms. An algorithm is μ-margin successful if the agreement ratio of the half-space it outputs is as good as that of any half-space once training points that are inside the μ-margins of its separating hyperplane are disregarded. We prove crisp computational complexity results with respect to this success measure: on one hand, for every positive μ, there exist efficient (poly-time) μ-margin successful learning algorithms; on the other hand, we prove that unless P=NP, there is no algorithm that runs in time polynomial in the sample size and in 1/μ that is μ-margin successful for all μ > 0.
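To make the success measure concrete, here is a minimal Python sketch (not from the paper) of the quantity it is built on: the agreement ratio of a candidate half-space w·x ≥ b on a labeled sample, after disregarding points whose distance to the hyperplane is below μ. The paper's full criterion compares this against the best achievable half-space; the function name and example data below are illustrative assumptions.

```python
import numpy as np

def mu_margin_agreement(w, b, X, y, mu):
    """Fraction of retained sample points the half-space w.x >= b labels
    correctly, where points within distance mu of the hyperplane w.x = b
    are disregarded (the 'mu-margin' of the separator).

    w : (n,) weight vector, X : (m, n) sample, y : (m,) labels in {-1, +1}.
    """
    w = np.asarray(w, dtype=float)
    # Signed Euclidean distance of each point to the hyperplane w.x = b.
    margins = (X @ w - b) / np.linalg.norm(w)
    # Disregard training points that fall inside the mu-margin.
    keep = np.abs(margins) >= mu
    if not keep.any():
        return 1.0  # every point is inside the margin; nothing to misclassify
    return float(np.mean(np.sign(margins[keep]) == y[keep]))

# Illustrative sample in R^2: the point near the separator is ignored for mu = 0.5.
X = np.array([[2.0, 0.0], [-2.0, 0.0], [0.1, 0.0]])
y = np.array([1, -1, -1])
print(mu_margin_agreement(w=[1.0, 0.0], b=0.0, X=X, y=y, mu=0.5))  # -> 1.0
```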

Cite

Text

Ben-David and Simon. "Efficient Learning of Linear Perceptrons." Neural Information Processing Systems, 2000.

Markdown

[Ben-David and Simon. "Efficient Learning of Linear Perceptrons." Neural Information Processing Systems, 2000.](https://mlanthology.org/neurips/2000/bendavid2000neurips-efficient/)

BibTeX

@inproceedings{bendavid2000neurips-efficient,
  title     = {{Efficient Learning of Linear Perceptrons}},
  author    = {Ben-David, Shai and Simon, Hans-Ulrich},
  booktitle = {Neural Information Processing Systems},
  year      = {2000},
  pages     = {189--195},
  url       = {https://mlanthology.org/neurips/2000/bendavid2000neurips-efficient/}
}