MLP Can Provably Generalize Much Better than VC-Bounds Indicate
Abstract
Results of a study of the worst case learning curves for a particular class of probability distribution on input space to MLP with hard threshold hidden units are presented. It is shown, in particular, that in the thermodynamic limit for scaling by the number of connections to the first hidden layer, although the true learning curve behaves as ~ α⁻¹ for α ≳ 1, its VC-dimension based bound is trivial (= 1) and its VC-entropy bound is trivial for α ≤ 6.2. It is also shown that bounds following the true learning curve can be derived from a formalism based on the density of error patterns.
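For orientation, the scaling comparison stated in the abstract can be written out explicitly. The VC-dimension bound below is the standard textbook (Vapnik-style) form and is used purely as an illustration, not as the exact bound analysed in the paper; here m is the number of training examples, N the number of connections to the first hidden layer, α = m/N, and δ the confidence parameter.

% Illustration only: the true learning curve quoted in the abstract versus a
% standard VC-dimension bound (textbook form; not necessarily the paper's bound).
\[
  \epsilon_{\text{true}}(\alpha) \;\sim\; \alpha^{-1},
  \qquad \alpha \gtrsim 1,
\]
\[
  \epsilon_{\text{VC}}(m) \;\le\;
  \sqrt{\frac{d_{\text{VC}}\bigl(\ln(2m/d_{\text{VC}}) + 1\bigr) + \ln(4/\delta)}{m}} .
\]
% Since the VC dimension of a hard-threshold MLP typically grows faster than the
% number of first-layer connections N (roughly W log W in the total number of
% weights W), holding α = m/N fixed keeps the right-hand side at or above 1 in
% the thermodynamic limit, which is what "trivial bound" means in the abstract.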
Cite
Text
Kowalczyk and Ferrá. "MLP Can Provably Generalize Much Better than VC-Bounds Indicate." Neural Information Processing Systems, 1996.
Markdown
[Kowalczyk and Ferrá. "MLP Can Provably Generalize Much Better than VC-Bounds Indicate." Neural Information Processing Systems, 1996.](https://mlanthology.org/neurips/1996/kowalczyk1996neurips-mlp/)
BibTeX
@inproceedings{kowalczyk1996neurips-mlp,
title = {{MLP Can Provably Generalize Much Better than VC-Bounds Indicate}},
author = {Kowalczyk, Adam and Ferrá, Herman L.},
booktitle = {Neural Information Processing Systems},
year = {1996},
pages = {190--196},
url = {https://mlanthology.org/neurips/1996/kowalczyk1996neurips-mlp/}
}