Bounds on the Approximation Power of Feedforward Neural Networks
Abstract
The approximation power of general feedforward neural networks with piecewise linear activation functions is investigated. First, lower bounds on the size of a network are established in terms of the approximation error and the network depth and width. These bounds improve upon state-of-the-art bounds for certain classes of functions, such as strongly convex functions. Second, an upper bound is established on the difference between the outputs of two neural networks with identical weights but different activation functions.
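To make the size-versus-error tradeoff concrete for strongly convex targets, below is a minimal NumPy sketch (an illustration under my own assumptions, not code or notation from the paper). A network with piecewise linear activations computes a piecewise linear function, and each hidden ReLU unit contributes at most one new breakpoint; the sketch builds the one-hidden-layer ReLU network that piecewise-linearly interpolates the strongly convex function f(x) = x² on a uniform grid and measures the sup-norm error, which decays like 1/(8n²) with n linear pieces. Function names such as `relu_net` are hypothetical.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def relu_net(x, breakpoints, f):
    """One-hidden-layer ReLU network realizing the piecewise linear
    interpolant of f at the given breakpoints (illustrative sketch)."""
    y = f(breakpoints)
    # slope of the interpolant on each sub-interval
    slopes = np.diff(y) / np.diff(breakpoints)
    # affine part: value and slope on the first sub-interval
    out = y[0] + slopes[0] * (x - breakpoints[0])
    # each interior breakpoint becomes one hidden ReLU unit whose
    # weight is the change in slope at that breakpoint
    for b, ds in zip(breakpoints[1:-1], np.diff(slopes)):
        out = out + ds * relu(x - b)
    return out

f = lambda x: x**2              # a strongly convex target function
xs = np.linspace(0.0, 1.0, 10_001)

for n in (4, 8, 16, 32):        # number of linear pieces
    bp = np.linspace(0.0, 1.0, n + 1)
    err = np.max(np.abs(relu_net(xs, bp, f) - f(xs)))
    print(f"{n:3d} pieces -> sup error {err:.2e}")  # ~ 1/(8 n^2)
```

Since the error shrinks only quadratically in the number of pieces, driving the sup-norm error below ε requires on the order of 1/√ε pieces, which is the flavor of dependence that lower bounds on network size for strongly convex functions capture.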
Cite
Text
Mehrabi et al. "Bounds on the Approximation Power of Feedforward Neural Networks." International Conference on Machine Learning, 2018.

Markdown
[Mehrabi et al. "Bounds on the Approximation Power of Feedforward Neural Networks." International Conference on Machine Learning, 2018.](https://mlanthology.org/icml/2018/mehrabi2018icml-bounds/)

BibTeX
@inproceedings{mehrabi2018icml-bounds,
title = {{Bounds on the Approximation Power of Feedforward Neural Networks}},
author = {Mehrabi, Mohammad and Tchamkerten, Aslan and Yousefi, Mansoor},
booktitle = {International Conference on Machine Learning},
year = {2018},
pages = {3453--3461},
volume = {80},
url = {https://mlanthology.org/icml/2018/mehrabi2018icml-bounds/}
}