On the Optimality of Incremental Neural Network Algorithms

Abstract

We study the approximation of functions by two-layer feedforward neural networks, focusing on incremental algorithms which greedily add units, estimating single unit parameters at each stage. As opposed to standard algorithms for fixed architectures, the optimization at each stage is performed over a small number of parameters, mitigating many of the difficult numerical problems inherent in high-dimensional non-linear optimization. We establish upper bounds on the error incurred by the algorithm, when approximating functions from the Sobolev class, thereby extending previous results which only provided rates of convergence for functions in certain convex hulls of functional spaces. By comparing our results to recently derived lower bounds, we show that the greedy algorithms are nearly optimal. Combined with estimation error results for greedy algorithms, these bounds make a strong case for this type of approach.
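
The abstract refers to incremental algorithms that add one unit at a time, fitting only that unit's parameters to the current residual. Below is a minimal illustrative sketch of this style of stage-wise greedy fitting, not the paper's algorithm: it assumes a sigmoidal activation, least-squares stage losses, and generic local optimization (via `scipy.optimize.minimize`) with random restarts; all function and parameter names are hypothetical.

```python
# Sketch of a greedy incremental two-layer network fit (illustrative only):
# at stage k, a single unit a * sigmoid(w.x + b) is fitted to the residual,
# so each stage optimizes only d + 2 parameters rather than the full network.
import numpy as np
from scipy.optimize import minimize


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def greedy_fit(X, y, n_units=10, n_restarts=5, seed=None):
    """Incrementally build f(x) = sum_k a_k * sigmoid(w_k . x + b_k)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    units = []                # list of fitted (a, w, b) triples
    residual = y.copy()       # current approximation error

    for _ in range(n_units):
        best = None
        for _ in range(n_restarts):
            p0 = rng.normal(size=d + 2)   # one unit: amplitude a, weights w, bias b

            def stage_loss(p):
                a, w, b = p[0], p[1:d + 1], p[d + 1]
                return np.mean((residual - a * sigmoid(X @ w + b)) ** 2)

            res = minimize(stage_loss, p0, method="L-BFGS-B")
            if best is None or res.fun < best.fun:
                best = res

        a, w, b = best.x[0], best.x[1:d + 1], best.x[d + 1]
        units.append((a, w, b))
        residual -= a * sigmoid(X @ w + b)   # shrink the residual for the next stage

    return units


def predict(units, X):
    return sum(a * sigmoid(X @ w + b) for a, w, b in units)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=(500, 2))
    y = np.sin(np.pi * X[:, 0]) * np.cos(np.pi * X[:, 1])   # smooth target function
    units = greedy_fit(X, y, n_units=8, seed=0)
    print("final training MSE:", np.mean((y - predict(units, X)) ** 2))
```

The point of the sketch is only to show why each stage is numerically cheap: every call to the optimizer searches a (d + 2)-dimensional space, independent of how many units have already been added.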

Cite

Text

Meir and Maiorov. "On the Optimality of Incremental Neural Network Algorithms." Neural Information Processing Systems, 1998.

Markdown

[Meir and Maiorov. "On the Optimality of Incremental Neural Network Algorithms." Neural Information Processing Systems, 1998.](https://mlanthology.org/neurips/1998/meir1998neurips-optimality/)

BibTeX

@inproceedings{meir1998neurips-optimality,
  title     = {{On the Optimality of Incremental Neural Network Algorithms}},
  author    = {Meir, Ron and Maiorov, Vitaly},
  booktitle = {Neural Information Processing Systems},
  year      = {1998},
  pages     = {295-301},
  url       = {https://mlanthology.org/neurips/1998/meir1998neurips-optimality/}
}