ResNet with One-Neuron Hidden Layers Is a Universal Approximator

Abstract

We demonstrate that a very deep ResNet with stacked modules that have one neuron per hidden layer and ReLU activation functions can uniformly approximate any Lebesgue-integrable function in d dimensions, i.e., any function in $\ell_1(\mathbb{R}^d)$. Due to the identity mapping inherent to ResNets, our network has alternating layers of dimension one and d. This stands in sharp contrast to fully connected networks, which are not universal approximators if their width is the input dimension d [21,11]. Hence, our result implies that the ResNet architecture increases the representational power of narrow deep networks.
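
To make the architecture concrete, here is a minimal PyTorch sketch of a network with the structure the abstract describes: stacked residual modules whose hidden layer has a single ReLU neuron, with an identity skip connection keeping the d-dimensional input flowing through. The class names, the final linear readout, and the chosen depth are illustrative assumptions, not details taken from the paper's construction.

```python
import torch
import torch.nn as nn

class OneNeuronResidualBlock(nn.Module):
    """One residual module with a single ReLU hidden neuron.
    Because of the identity skip connection, layer widths
    alternate between d and 1, as described in the abstract.
    """
    def __init__(self, d):
        super().__init__()
        self.down = nn.Linear(d, 1)  # d -> 1: the one-neuron hidden layer
        self.up = nn.Linear(1, d)    # 1 -> d: project back to input dimension

    def forward(self, x):
        # Identity mapping plus the one-neuron ReLU branch.
        return x + self.up(torch.relu(self.down(x)))

class OneNeuronResNet(nn.Module):
    """A deep stack of such modules; the linear readout is an
    illustrative assumption, not part of the paper's statement."""
    def __init__(self, d, depth):
        super().__init__()
        self.blocks = nn.Sequential(
            *[OneNeuronResidualBlock(d) for _ in range(depth)]
        )
        self.readout = nn.Linear(d, 1)

    def forward(self, x):
        return self.readout(self.blocks(x))

# Usage: a depth-100 network on 2-dimensional inputs.
net = OneNeuronResNet(d=2, depth=100)
y = net(torch.randn(8, 2))  # output shape: (8, 1)
```

The universality result is about very deep stacks of these blocks, so `depth` here is just a placeholder; the theorem concerns what such networks can approximate as depth grows, not any particular finite configuration.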

Cite

Text

Lin and Jegelka. "ResNet with One-Neuron Hidden Layers Is a Universal Approximator." Neural Information Processing Systems, 2018.

Markdown

[Lin and Jegelka. "ResNet with One-Neuron Hidden Layers Is a Universal Approximator." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/lin2018neurips-resnet/)

BibTeX

@inproceedings{lin2018neurips-resnet,
  title     = {{ResNet with One-Neuron Hidden Layers Is a Universal Approximator}},
  author    = {Lin, Hongzhou and Jegelka, Stefanie},
  booktitle = {Neural Information Processing Systems},
  year      = {2018},
  pages     = {6169--6178},
  url       = {https://mlanthology.org/neurips/2018/lin2018neurips-resnet/}
}