On Learning Noisy Threshold Functions with Finite Precision Weights

Abstract

We address the issue of the precision required by an N-input threshold element in order to implement a linearly separable mapping. In contrast with previous work, we require only the ability to correctly implement the mapping of P randomly chosen training examples, as opposed to the complete Boolean mapping. Our results are obtained within the statistical mechanics approach and are thus average-case results, as opposed to the worst-case analyses in the computational learning theory literature. We show that as long as the fraction P/N is finite, then with probability close to 1 as N → ∞ a finite number of bits suffices to implement the mapping. This should be compared to the worst-case analysis, which requires O(N log N) bits. We also calculate the ability of the constrained network to predict novel examples and compare its predictions to those of an unconstrained network. Finally, we examine the performance of the finite-precision network in the face of noisy training examples.
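The setting the abstract describes can be made concrete with a small numerical sketch: train an unconstrained N-input threshold element (a perceptron) on P random examples of a linearly separable rule, then quantize its weights to a given number of bits and check how many of the P training examples it still classifies correctly. The quantization scheme, the perceptron learning rule, and all parameter values below are illustrative choices, not the paper's construction.

```python
import numpy as np

def quantize(w, bits):
    """Map real-valued weights onto a uniform grid of 2**bits levels
    spanning [-max|w|, +max|w|]. (An illustrative scheme only.)"""
    scale = float(np.max(np.abs(w))) or 1.0
    step = 2.0 * scale / (2 ** bits - 1)
    return np.round(w / step) * step

def threshold_output(w, x):
    """N-input threshold element: the sign of the weighted sum."""
    return np.sign(x @ w)

rng = np.random.default_rng(0)
N, P = 100, 50                               # finite load P/N = 0.5
X = rng.choice([-1.0, 1.0], size=(P, N))     # P random +/-1 examples
teacher = rng.standard_normal(N)             # a linearly separable target rule
y = threshold_output(teacher, X)

# Train an unconstrained perceptron on the P examples.
w = np.zeros(N)
for _ in range(100):
    for xi, yi in zip(X, y):
        if np.sign(xi @ w) != yi:
            w += yi * xi                     # perceptron update rule

# Quantize the learned weights and measure training accuracy per precision.
for bits in (2, 4, 8):
    acc = np.mean(threshold_output(quantize(w, bits), X) == y)
    print(f"{bits} bits per weight: training accuracy {acc:.2f}")
```

For this finite load, moderate precision already tends to preserve the mapping on the P training examples, in the spirit of the abstract's average-case claim; the worst-case O(N log N)-bit requirement concerns implementing the full Boolean mapping, which this sketch does not attempt.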

Cite

Text

Meir and Fontanari. "On Learning Noisy Threshold Functions with Finite Precision Weights." Annual Conference on Computational Learning Theory, 1992. doi:10.1145/130385.130416

Markdown

[Meir and Fontanari. "On Learning Noisy Threshold Functions with Finite Precision Weights." Annual Conference on Computational Learning Theory, 1992.](https://mlanthology.org/colt/1992/meir1992colt-learning/) doi:10.1145/130385.130416

BibTeX

@inproceedings{meir1992colt-learning,
  title     = {{On Learning Noisy Threshold Functions with Finite Precision Weights}},
  author    = {Meir, Ronny and Fontanari, José F.},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {1992},
  pages     = {280--286},
  doi       = {10.1145/130385.130416},
  url       = {https://mlanthology.org/colt/1992/meir1992colt-learning/}
}