The VC Dimension and Pseudodimension of Two-Layer Neural Networks with Discrete Inputs
Abstract
We give upper bounds on the Vapnik-Chervonenkis dimension and pseudodimension of two-layer neural networks that use the standard sigmoid function or radial basis function and have inputs from {−D, …, D}^n. In Valiant's probably approximately correct (PAC) learning framework for pattern classification, and in Haussler's generalization of this framework to nonlinear regression, the results imply that the number of training examples necessary for satisfactory learning performance grows no more rapidly than W log(WD), where W is the number of weights. The previous best bound for these networks was O(W^4).
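To get a rough sense of the gap between the two growth rates, the Python sketch below (not from the paper) compares W log(WD) with W^4 for a small hypothetical network; the weight-count formula and the suppressed multiplicative constants are assumptions made purely for illustration.

```python
import math

def num_weights(n_inputs: int, n_hidden: int) -> int:
    """Weights and biases of a two-layer (one-hidden-layer) network:
    n_hidden units, each with n_inputs weights and a bias, plus a
    linear output unit with n_hidden weights and a bias.
    (Assumed architecture, for illustration only.)"""
    return n_hidden * (n_inputs + 1) + (n_hidden + 1)

def sample_bound_new(W: int, D: int) -> float:
    # Growth rate from the paper: W log(WD), constants suppressed.
    return W * math.log(W * D)

def sample_bound_old(W: int) -> float:
    # Previous best growth rate: O(W^4), constants suppressed.
    return float(W) ** 4

W = num_weights(n_inputs=10, n_hidden=20)  # W = 241
print(f"W = {W}")
print(f"W log(WD), D = 100: {sample_bound_new(W, 100):.0f}")
print(f"W^4:                {sample_bound_old(W):.0f}")
```

Even for this tiny network (W = 241, D = 100), the W log(WD) rate is on the order of a few thousand while W^4 is in the billions, which conveys the scale of the improvement the paper establishes.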
Cite
Text
Bartlett and Williamson. "The VC Dimension and Pseudodimension of Two-Layer Neural Networks with Discrete Inputs." Neural Computation, 1996. doi:10.1162/NECO.1996.8.3.625
Markdown
[Bartlett and Williamson. "The VC Dimension and Pseudodimension of Two-Layer Neural Networks with Discrete Inputs." Neural Computation, 1996.](https://mlanthology.org/neco/1996/bartlett1996neco-vc/) doi:10.1162/NECO.1996.8.3.625
BibTeX
@article{bartlett1996neco-vc,
title = {{The VC Dimension and Pseudodimension of Two-Layer Neural Networks with Discrete Inputs}},
author = {Bartlett, Peter L. and Williamson, Robert C.},
journal = {Neural Computation},
year = {1996},
pages = {625--628},
doi = {10.1162/NECO.1996.8.3.625},
volume = {8},
url = {https://mlanthology.org/neco/1996/bartlett1996neco-vc/}
}