Threshold Network Learning in the Presence of Equivalences

Abstract

This paper applies the theory of Probably Approximately Correct (PAC) learning to multiple output feedforward threshold networks in which the weights conform to certain equivalences. It is shown that the sample size for reliable learning can be bounded above by a formula similar to that required for single output networks with no equivalences. The best previously obtained bounds are improved for all cases.
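The paper's exact bound is not reproduced on this page; as a hedged point of reference only, sample-size upper bounds in the PAC framework typically take the following standard form, where the accuracy parameter ε, confidence parameter δ, and VC dimension d are the usual PAC quantities rather than notation taken from this paper:

$$
m(\varepsilon, \delta) = O\!\left(\frac{1}{\varepsilon}\left(d \log\frac{1}{\varepsilon} + \log\frac{1}{\delta}\right)\right)
$$

Bounds of this shape say that the number of training examples needed grows roughly linearly in the capacity measure d and only logarithmically in 1/δ; the paper's contribution concerns how weight equivalences affect the relevant capacity term for multiple output threshold networks.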

Cite

Text

Shawe-Taylor. "Threshold Network Learning in the Presence of Equivalences." Neural Information Processing Systems, 1991.

Markdown

[Shawe-Taylor. "Threshold Network Learning in the Presence of Equivalences." Neural Information Processing Systems, 1991.](https://mlanthology.org/neurips/1991/shawetaylor1991neurips-threshold/)

BibTeX

@inproceedings{shawetaylor1991neurips-threshold,
  title     = {{Threshold Network Learning in the Presence of Equivalences}},
  author    = {Shawe-Taylor, John},
  booktitle = {Neural Information Processing Systems},
  year      = {1991},
  pages     = {879-886},
  url       = {https://mlanthology.org/neurips/1991/shawetaylor1991neurips-threshold/}
}