Size of Multilayer Networks for Exact Learning: Analytic Approach

Abstract

This article presents a new result about the size of a multilayer neural network computing real outputs for exact learning of a finite set of real samples. The architecture of the network is feedforward, with one hidden layer and several outputs. Starting from a fixed training set, we consider the network as a function of its weights. We derive, for a wide family of transfer functions, a lower and an upper bound on the number of hidden units for exact learning, given the size of the dataset and the dimensions of the input and output spaces.

Cite

Text

Elisseeff and Paugam-Moisy. "Size of Multilayer Networks for Exact Learning: Analytic Approach." Neural Information Processing Systems, 1996.

Markdown

[Elisseeff and Paugam-Moisy. "Size of Multilayer Networks for Exact Learning: Analytic Approach." Neural Information Processing Systems, 1996.](https://mlanthology.org/neurips/1996/elisseeff1996neurips-size/)

BibTeX

@inproceedings{elisseeff1996neurips-size,
  title     = {{Size of Multilayer Networks for Exact Learning: Analytic Approach}},
  author    = {Elisseeff, André and Paugam-Moisy, Hélène},
  booktitle = {Neural Information Processing Systems},
  year      = {1996},
  pages     = {162--168},
  url       = {https://mlanthology.org/neurips/1996/elisseeff1996neurips-size/}
}