Why Some Feedforward Networks Cannot Learn Some Polynomials
Abstract
It seems natural to test feedforward networks on deterministic functions. Yet some simple functions, notably polynomials, pose difficult problems for approximation by feedforward networks. The estimated parameters become unbounded and fail to follow any unique pattern. Furthermore, as the fit to the specified function becomes closer, numerical problems may develop in the training algorithm. This paper explains why these problems occur for polynomials of order less than or equal to the number of hidden units of a feedforward network. We show that further examples arise for functions mathematically related to the network's squashing function. These difficulties do not indicate problems with the training algorithm; they occur as an inherent consequence of the role of the connection weights in feedforward networks.
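A minimal numerical sketch (not from the paper's text) of the mechanism the abstract describes, under the assumption of a tanh squashing function: to approximate the identity f(x) = x with a single hidden unit v · tanh(w · x), the small-w expansion tanh(wx) ≈ wx gives v ≈ 1/w, so the approximation error shrinks only as w → 0 while the output weight v grows without bound.

```python
import numpy as np

# Approximate f(x) = x on [-1, 1] with one tanh hidden unit: v * tanh(w * x).
# Choosing v = 1/w matches the slope at the origin; as w shrinks, the fit
# improves but the output weight v diverges -- no finite parameter vector
# attains the infimum of the approximation error.
x = np.linspace(-1.0, 1.0, 201)
for w in (1.0, 0.1, 0.01):
    v = 1.0 / w  # output weight forced by the slope condition v * w = 1
    err = np.max(np.abs(v * np.tanh(w * x) - x))
    print(f"w={w:<6} v={v:>8.1f} max|error|={err:.2e}")
```

The same pattern drives the numerical trouble the abstract mentions: a gradient-based trainer chasing a closer fit pushes the weights toward this unbounded regime.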
Cite
Text
Cardell et al. "Why Some Feedforward Networks Cannot Learn Some Polynomials." Neural Computation, 1994. doi:10.1162/NECO.1994.6.4.761
Markdown
[Cardell et al. "Why Some Feedforward Networks Cannot Learn Some Polynomials." Neural Computation, 1994.](https://mlanthology.org/neco/1994/cardell1994neco-some/) doi:10.1162/NECO.1994.6.4.761
BibTeX
@article{cardell1994neco-some,
title = {{Why Some Feedforward Networks Cannot Learn Some Polynomials}},
author = {Cardell, N. Scott and Joerding, Wayne H. and Li, Ying},
journal = {Neural Computation},
year = {1994},
pages = {761-766},
doi = {10.1162/NECO.1994.6.4.761},
volume = {6},
url = {https://mlanthology.org/neco/1994/cardell1994neco-some/}
}