Depth Separation for Neural Networks
Abstract
Let $f:\mathbb{S}^{d-1}\times \mathbb{S}^{d-1}\to\mathbb{R}$ be a function of the form $f(x,x') = g(\langle x,x'\rangle)$ for $g:[-1,1]\to \mathbb{R}$. We give a simple proof that poly-size depth-two neural networks with (exponentially) bounded weights cannot approximate $f$ whenever $g$ cannot be approximated by a low-degree polynomial. Moreover, for many $g$'s, such as $g(x)=\sin(\pi d^3 x)$, the number of neurons must be $2^{\Omega\left(d\log(d)\right)}$. Furthermore, the result holds w.r.t. the uniform distribution on $\mathbb{S}^{d-1}\times \mathbb{S}^{d-1}$. As many functions of the above form can be well approximated by poly-size depth-three networks with poly-bounded weights, this establishes a separation between depth-two and depth-three networks w.r.t. the uniform distribution on $\mathbb{S}^{d-1}\times \mathbb{S}^{d-1}$.
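To make the condition on $g$ concrete: $g(t)=\sin(\pi d^3 t)$ oscillates $\Theta(d^3)$ times on $[-1,1]$, so no polynomial of degree much below $d^3$ can track it. The following minimal numerical sketch (ours, not from the paper or its proof; the dimension $d$, the degrees tried, and the grid size are all illustrative choices) fits Chebyshev polynomials to $g$ by least squares and shows the error stays near $1$ until the degree reaches the order of $d^3$.

import numpy as np

# Illustrative check (not the paper's argument): g(t) = sin(pi * d^3 * t)
# resists polynomial approximation on [-1, 1] until the degree is ~ d^3.
d = 3  # ambient dimension; illustrative choice

def g(t):
    return np.sin(np.pi * d**3 * t)

# Least-squares Chebyshev fits on a fine grid of Chebyshev nodes.
ts = np.cos(np.pi * (np.arange(2000) + 0.5) / 2000)
ys = g(ts)
for k in [5, 20, d**3, 4 * d**3]:
    c = np.polynomial.chebyshev.chebfit(ts, ys, k)
    err = np.max(np.abs(np.polynomial.chebyshev.chebval(ts, c) - ys))
    print(f"degree {k:3d}: max fit error on grid = {err:.3f}")

The low-degree fits have error close to 1 (the error of the trivial predictor 0), and only degrees past roughly $\pi d^3$ fit well. Since the required degree grows polynomially with $d$, $g$ admits no fixed low-degree approximation, which is exactly the property the lower bound exploits.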
Cite
Text
Daniely. "Depth Separation for Neural Networks." Proceedings of the 2017 Conference on Learning Theory, 2017.

BibTeX
@inproceedings{daniely2017colt-depth,
  title     = {{Depth Separation for Neural Networks}},
  author    = {Daniely, Amit},
  booktitle = {Proceedings of the 2017 Conference on Learning Theory},
  year      = {2017},
  pages     = {690--696},
  volume    = {65},
  url       = {https://mlanthology.org/colt/2017/daniely2017colt-depth/}
}