Complexity of Finite Precision Neural Network Classifier
Abstract
A rigorous analysis of the finite precision computational aspects of a neural network as a pattern classifier, via a probabilistic approach, is presented. Even though there exist negative results on the capability of the perceptron, we show the following positive results: given n pattern vectors, each represented by en bits where e > 1, that are uniformly distributed, with high probability the perceptron can perform all possible binary classifications of the patterns. Moreover, the resulting neural network requires a vanishingly small proportion, O(log n/n), of the memory that would be required for complete storage of the patterns. Further, the perceptron algorithm takes O(n^2) arithmetic operations with high probability, whereas other methods such as linear programming take O(n^3.5) in the worst case. We also indicate some mathematical connections with VLSI circuit testing and the theory of random matrices.
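As an illustrative sketch (not the paper's code), the classical perceptron update rule on ±1 pattern vectors can be written as below. The dimensions and the linearly separable label construction are assumptions made so the demo converges; the paper's probabilistic setting concerns uniformly distributed bit patterns.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 64  # n patterns, each a d-dimensional ±1 vector (illustrative sizes)

# Random ±1 pattern vectors, labeled by a hidden linear separator so that
# a perfect binary classification exists and the perceptron must converge.
X = rng.choice([-1.0, 1.0], size=(n, d))
w_true = rng.normal(size=d)
y = np.sign(X @ w_true)

# Perceptron learning: sweep the patterns, updating on each mistake.
w = np.zeros(d)
updates = 0
changed = True
while changed:
    changed = False
    for xi, yi in zip(X, y):
        if yi * (xi @ w) <= 0:   # misclassified (or on the boundary)
            w += yi * xi         # move the weight vector toward the pattern
            updates += 1
            changed = True

# After convergence, w classifies all n patterns correctly.
print(updates, bool(np.all(np.sign(X @ w) == y)))
```

Each update costs O(d) arithmetic operations, so the total work is O(d × updates); the paper's O(n^2) bound is a high-probability statement about the number of such operations in its finite-precision setting.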
Dembo et al. "Complexity of Finite Precision Neural Network Classifier." Neural Information Processing Systems, 1989.
@inproceedings{dembo1989neurips-complexity,
title = {{Complexity of Finite Precision Neural Network Classifier}},
author = {Dembo, Amir and Siu, Kai-Yeung and Kailath, Thomas},
booktitle = {Neural Information Processing Systems},
year = {1989},
pages = {668-675},
url = {https://mlanthology.org/neurips/1989/dembo1989neurips-complexity/}
}