Learning Linear Threshold Approximations Using Perceptrons
Abstract
We demonstrate sufficient conditions for polynomial learnability of suboptimal linear threshold functions using perceptrons. The central result is as follows. Suppose there exists a vector w* of n weights (including the threshold) with “accuracy” 1 − α, “average error” η, and “balancing separation” σ, i.e., with probability 1 − α, w* correctly classifies an example x; over examples incorrectly classified by w*, the expected value of |w* · x| is η (the source of the inaccuracy does not matter); and over a certain portion of correctly classified examples, the expected value of |w* · x| is σ. Then, with probability 1 − δ, the perceptron achieves accuracy at least 1 − [ε + α(1 + η/σ)] after O[nε⁻²σ⁻² ln(1/δ)] examples.
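For concreteness, below is a minimal sketch of the classic mistake-driven perceptron update that the analysis concerns: the threshold is folded into the weight vector as an extra coordinate, and the weights are updated additively whenever an example is misclassified. The function name `run_perceptron`, the synthetic data, the unit-norm scaling of examples, and the 5% label-noise rate are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def run_perceptron(xs, ys):
    """Mistake-driven perceptron over a stream of labeled examples.

    xs : array of shape (m, n); each row is an example, with the
         threshold folded in as an extra coordinate
    ys : array of shape (m,); labels in {-1, +1}
    Returns the final weight vector w (n weights, including threshold).
    """
    w = np.zeros(xs.shape[1])
    for x, y in zip(xs, ys):
        if y * np.dot(w, x) <= 0:  # misclassified (or zero margin)
            w = w + y * x          # additive update toward the example
    return w

# Illustrative usage on synthetic, mostly linearly separable data.
rng = np.random.default_rng(0)
m, n = 5000, 10
xs = rng.normal(size=(m, n))
xs[:, -1] = 1.0                                   # threshold coordinate
xs /= np.linalg.norm(xs, axis=1, keepdims=True)   # scale examples to unit norm
w_star = rng.normal(size=n)                       # hidden target w*
ys = np.sign(xs @ w_star)
flip = rng.random(m) < 0.05                       # flip 5% of labels as noise
ys[flip] *= -1

w = run_perceptron(xs, ys)
agreement = np.mean(np.sign(xs @ w) == np.sign(xs @ w_star))
print(f"agreement with w*: {agreement:.3f}")
```

In this sketch the abstract's parameters correspond to measurable quantities: α is roughly the noise rate, and η and σ are expectations of |w* · x| over the misclassified and (a portion of the) correctly classified examples, respectively.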
Cite
Text
Bylander. "Learning Linear Threshold Approximations Using Perceptrons." Neural Computation, 1995. doi:10.1162/NECO.1995.7.2.370
Markdown
[Bylander. "Learning Linear Threshold Approximations Using Perceptrons." Neural Computation, 1995.](https://mlanthology.org/neco/1995/bylander1995neco-learning/) doi:10.1162/NECO.1995.7.2.370
BibTeX
@article{bylander1995neco-learning,
title = {{Learning Linear Threshold Approximations Using Perceptrons}},
author = {Bylander, Tom},
journal = {Neural Computation},
year = {1995},
pages = {370--379},
doi = {10.1162/NECO.1995.7.2.370},
volume = {7},
url = {https://mlanthology.org/neco/1995/bylander1995neco-learning/}
}