Sharp Representation Theorems for ReLU Networks with Precise Dependence on Depth
Abstract
We prove dimension-free representation results for neural networks with D ReLU layers under square loss, for a class of functions G_D defined in the paper. These results capture the precise benefits of depth in the following sense:
Bresler, Guy and Nagaraj, Dheeraj. "Sharp Representation Theorems for ReLU Networks with Precise Dependence on Depth." Neural Information Processing Systems, 2020.
@inproceedings{bresler2020neurips-sharp,
title = {{Sharp Representation Theorems for ReLU Networks with Precise Dependence on Depth}},
author = {Bresler, Guy and Nagaraj, Dheeraj},
booktitle = {Neural Information Processing Systems},
year = {2020},
url = {https://mlanthology.org/neurips/2020/bresler2020neurips-sharp/}
}