The Power of Deeper Networks for Expressing Natural Functions

Abstract

It is well known that neural networks are universal approximators, but that deeper networks tend in practice to be more powerful than shallower ones. We shed light on this by proving that the total number of neurons m required to approximate natural classes of multivariate polynomials in n variables grows only linearly with n for deep neural networks, but grows exponentially when only a single hidden layer is allowed. We also provide evidence that when the number of hidden layers is increased from 1 to k, the neuron requirement grows exponentially not with n but with n^(1/k), suggesting that the minimum number of layers required for practical expressibility grows only logarithmically with n.
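The abstract's scaling claims can be restated compactly in symbols. The following is our own summary notation, not the paper's; the precise constants, polynomial classes, and approximation notions are given in the paper itself.

```latex
% m_k(n): neurons needed with k hidden layers to approximate the
% polynomial classes considered (summary notation, ours not the paper's)
m_1(n) &= e^{\Theta(n)}           && \text{(one hidden layer: exponential in } n\text{)} \\
m_{\infty}(n) &= O(n)             && \text{(unbounded depth: linear in } n\text{)} \\
m_k(n) &= e^{\Theta(n^{1/k})}     && \text{(} k \text{ hidden layers: exponential in } n^{1/k}\text{)} \\
k_{\min}(n) &= O(\log n)          && \text{(depth sufficient for polynomial-size networks)}
```

Setting m_k(n) to be polynomial in n in the third line and solving for k yields the logarithmic depth bound in the fourth.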

Cite

Text

Rolnick and Tegmark. "The Power of Deeper Networks for Expressing Natural Functions." International Conference on Learning Representations, 2018.

Markdown

[Rolnick and Tegmark. "The Power of Deeper Networks for Expressing Natural Functions." International Conference on Learning Representations, 2018.](https://mlanthology.org/iclr/2018/rolnick2018iclr-power/)

BibTeX

@inproceedings{rolnick2018iclr-power,
  title     = {{The Power of Deeper Networks for Expressing Natural Functions}},
  author    = {Rolnick, David and Tegmark, Max},
  booktitle = {International Conference on Learning Representations},
  year      = {2018},
  url       = {https://mlanthology.org/iclr/2018/rolnick2018iclr-power/}
}