Universal Approximation with Deep Narrow Networks

Abstract

The classical Universal Approximation Theorem holds for neural networks of arbitrary width and bounded depth. Here we consider the natural ‘dual’ scenario for networks of bounded width and arbitrary depth. Precisely, let $n$ be the number of input neurons, $m$ be the number of output neurons, and let $\rho$ be any nonaffine continuous function that is continuously differentiable at at least one point, with nonzero derivative at that point. Then we show that the class of neural networks of arbitrary depth, width $n + m + 2$, and activation function $\rho$ is dense in $C(K; \mathbb{R}^m)$ for every compact $K \subseteq \mathbb{R}^n$. This covers every activation function used in practice and, unlike the classical version of the theorem, also includes polynomial activation functions, demonstrating a qualitative difference between deep narrow networks and shallow wide networks. We then consider several extensions of this result. In particular, we consider nowhere-differentiable activation functions, density on noncompact domains with respect to the $L^p$-norm, and how the width may be reduced to just $n + m + 1$ for ‘most’ activation functions.
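To make the setting concrete, here is a minimal NumPy sketch of the network class the theorem refers to: $n$ input neurons, $m$ output neurons, and arbitrarily many hidden layers of the fixed width $n + m + 2$. The weights, depth, and choice of activation below are illustrative assumptions for dimension-checking only; this is not the approximation construction from the paper.

```python
import numpy as np

def deep_narrow_forward(x, weights, biases, rho=np.tanh):
    """Forward pass through a deep narrow network.

    Every hidden layer has the same fixed width; only the depth varies.
    `rho` stands in for the nonaffine activation in the theorem
    (tanh is just an example choice).
    """
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = rho(h @ W + b)
    # Final affine layer maps onto the m output neurons.
    W_out, b_out = weights[-1], biases[-1]
    return h @ W_out + b_out

# Illustrative dimensions (hypothetical, not taken from the paper):
n, m, depth = 3, 2, 10    # inputs, outputs, number of hidden layers
width = n + m + 2         # the width bound from the theorem

rng = np.random.default_rng(0)
dims = [n] + [width] * depth + [m]
weights = [rng.standard_normal((a, b)) * 0.5
           for a, b in zip(dims[:-1], dims[1:])]
biases = [rng.standard_normal(b) * 0.1 for b in dims[1:]]

x = rng.standard_normal((4, n))   # a batch of 4 points in R^n
y = deep_narrow_forward(x, weights, biases)
print(y.shape)                    # (4, m) = (4, 2)
```

The point of the sketch is only the shape of the hypothesis class: density is obtained by letting `depth` grow while `width` stays fixed at $n + m + 2$, the reverse of the classical wide-and-shallow regime.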

Cite

Text

Kidger and Lyons. "Universal Approximation with Deep Narrow Networks." Conference on Learning Theory, 2020.

Markdown

[Kidger and Lyons. "Universal Approximation with Deep Narrow Networks." Conference on Learning Theory, 2020.](https://mlanthology.org/colt/2020/kidger2020colt-universal/)

BibTeX

@inproceedings{kidger2020colt-universal,
  title     = {{Universal Approximation with Deep Narrow Networks}},
  author    = {Kidger, Patrick and Lyons, Terry},
  booktitle = {Conference on Learning Theory},
  year      = {2020},
  pages     = {2306-2327},
  volume    = {125},
  url       = {https://mlanthology.org/colt/2020/kidger2020colt-universal/}
}