Robust Trainability of Single Neurons

Abstract

We investigate the problem of learning concepts by presenting labeled, randomly chosen training examples to single neurons. It is well known that linear halfspaces are learnable by the method of linear programming. The corresponding (McCulloch-Pitts) neurons are therefore efficiently trainable to learn an unknown halfspace from examples. We analyze how fast the learning performance degrades when the representational power of the neuron is overstrained, i.e., when more complex concepts than just halfspaces are allowed. We show that a neuron cannot efficiently find its probably almost optimal adjustment (unless RP = NP). If the weights and the threshold of the neuron have a fixed constant bound on their coding length, the situation is even worse: there is in general no polynomial-time training method which bounds the resulting prediction error of the neuron by k · opt for a fixed constant k (unless RP = NP). Other variants of learning concepts more complex than halfspaces by single neurons are also investigated. We show that neither heuristic learning nor learning by sigmoidal neurons with a constant reject rate is efficiently possible (unless RP = NP).
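The efficiently trainable base case mentioned in the abstract — learning a halfspace consistent with labeled examples via linear programming — can be sketched as follows. This is not the authors' code; it is a minimal illustration, assuming SciPy's `linprog` solver and a strictly separable sample, that phrases consistency as an LP feasibility problem.

```python
import numpy as np
from scipy.optimize import linprog

def train_halfspace(X, y):
    """Find weights w and threshold b with y_i * (w . x_i - b) >= 1
    for every example, by solving a linear-programming feasibility problem.

    X: (n, d) array of examples; y: (n,) array of labels in {-1, +1}.
    Returns (w, b) on success, or None if no consistent halfspace exists.
    """
    n, d = X.shape
    # Variables z = (w_1, ..., w_d, b).
    # y_i * (w . x_i - b) >= 1  <=>  -y_i * (x_i . w - b) <= -1,
    # i.e. each constraint row is -y_i * [x_i, -1].
    A_ub = -y[:, None] * np.hstack([X, -np.ones((n, 1))])
    b_ub = -np.ones(n)
    res = linprog(c=np.zeros(d + 1),          # pure feasibility: zero objective
                  A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (d + 1))  # variables unbounded
    if not res.success:
        return None
    return res.x[:d], res.x[d]
```

When the sample is consistent with some halfspace, the LP is feasible and the solver runs in polynomial time; the hardness results of the paper concern precisely the situation where no such consistent halfspace exists and one must approximate the minimal error.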

Cite

Text

Höffgen and Simon. "Robust Trainability of Single Neurons." Annual Conference on Computational Learning Theory, 1992. doi:10.1145/130385.130431

Markdown

[Höffgen and Simon. "Robust Trainability of Single Neurons." Annual Conference on Computational Learning Theory, 1992.](https://mlanthology.org/colt/1992/hoffgen1992colt-robust/) doi:10.1145/130385.130431

BibTeX

@inproceedings{hoffgen1992colt-robust,
  title     = {{Robust Trainability of Single Neurons}},
  author    = {Höffgen, Klaus-Uwe and Simon, Hans Ulrich},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {1992},
  pages     = {428--439},
  doi       = {10.1145/130385.130431},
  url       = {https://mlanthology.org/colt/1992/hoffgen1992colt-robust/}
}