Generalisation in Feedforward Networks
Abstract
We discuss a model of consistent learning with an additional restriction on the probability distribution of training samples, the target concept and hypothesis class. We show that the model provides a significant improvement on the upper bounds of sample complexity, i.e. the minimal number of random training samples allowing a selection of the hypothesis with a predefined accuracy and confidence. Further, we show that the model has the potential for providing a finite sample complexity even in the case of infinite VC-dimension as well as for a sample complexity below VC-dimension. This is achieved by linking sample complexity to an "average" number of implementable dichotomies of a training sample rather than the maximal size of a shattered sample, i.e. VC-dimension.
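To make the dichotomy-counting idea concrete, below is a minimal, hypothetical sketch, not the construction from the paper: for a single linear threshold unit through the origin acting on m points in general position in R^d, Cover's 1965 counting formula gives the exact number of dichotomies the unit can implement on that sample, which falls far below the worst-case 2^m once m exceeds the VC-dimension d. The function names cover_count and empirical_dichotomies, and the Monte-Carlo estimate, are assumptions made for this illustration only.

```python
# Hypothetical illustration (not the paper's bound): compare the number of
# dichotomies a linear threshold unit can realise on an actual sample with the
# worst-case count 2^m implied by shattering.

import numpy as np
from math import comb


def cover_count(m: int, d: int) -> int:
    """Exact number of homogeneously linearly separable dichotomies of m points
    in general position in R^d (Cover, 1965). For m <= d the sum collapses to
    2^m, i.e. the sample is shattered."""
    return 2 * sum(comb(m - 1, k) for k in range(d))


def empirical_dichotomies(X: np.ndarray, n_trials: int = 100_000) -> int:
    """Monte-Carlo lower estimate: count the distinct sign patterns induced on
    the sample X (shape m x d) by randomly drawn weight vectors."""
    rng = np.random.default_rng(0)
    W = rng.standard_normal((n_trials, X.shape[1]))
    patterns = {tuple(row) for row in (W @ X.T > 0).astype(int)}
    return len(patterns)


if __name__ == "__main__":
    d = 3                                    # input dimension; VC-dimension of the unit is d
    rng = np.random.default_rng(1)
    for m in (3, 5, 10, 15):
        X = rng.standard_normal((m, d))      # m random training points, in general position a.s.
        print(f"m={m:2d}  exact={cover_count(m, d):5d}  "
              f"empirical>={empirical_dichotomies(X):5d}  2^m={2 ** m}")
```

For d = 3 and m = 15, the unit realises only 2(1 + 14 + 91) = 212 of the 2^15 = 32768 possible labellings; this gap between the dichotomy count on a concrete sample and the worst-case shattering count is the kind of quantity the abstract links sample complexity to.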
Cite

Text:
Kowalczyk and Ferrá. "Generalisation in Feedforward Networks." Neural Information Processing Systems, 1994.

Markdown:
[Kowalczyk and Ferrá. "Generalisation in Feedforward Networks." Neural Information Processing Systems, 1994.](https://mlanthology.org/neurips/1994/kowalczyk1994neurips-generalisation/)

BibTeX:
@inproceedings{kowalczyk1994neurips-generalisation,
title = {{Generalisation in Feedforward Networks}},
author = {Kowalczyk, Adam and Ferrá, Herman L.},
booktitle = {Neural Information Processing Systems},
year = {1994},
pages = {215-222},
url = {https://mlanthology.org/neurips/1994/kowalczyk1994neurips-generalisation/}
}