On Neuronal Capacity
Abstract
We define the capacity of a learning machine to be the logarithm of the number (or volume) of the functions it can implement. We review known results and derive new ones, estimating the capacity of several neuronal models: linear and polynomial threshold gates, linear and polynomial threshold gates with constrained weights (binary weights, positive weights), and ReLU neurons. We also derive capacity estimates and bounds for fully recurrent networks and layered feedforward networks.
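To make the definition concrete, here is a minimal sketch (not from the paper) that counts the Boolean functions on {0,1}^n realizable by a single linear threshold gate and reports the capacity C(n) = log2(count). It brute-forces every truth table and decides linear separability with an LP feasibility check; the helper name `is_threshold_function` and the overall approach are mine, not the authors' method.

```python
# Sketch: enumerate truth tables on {0,1}^n and count those realizable by a
# linear threshold gate sign(w.x + b); capacity = log2 of that count.
from itertools import product

import numpy as np
from scipy.optimize import linprog


def is_threshold_function(labels, inputs):
    """True iff some (w, b) satisfies sign(w.x + b) = labels on all inputs.

    We require s * (w.x + b) >= 1 with s = +/-1 per point; by rescaling
    (w, b), this is equivalent to strict linear separability.
    """
    s = 2 * np.asarray(labels) - 1                       # map {0,1} -> {-1,+1}
    X = np.hstack([inputs, np.ones((len(inputs), 1))])   # append bias column
    A_ub = -s[:, None] * X                               # -s*(w.x + b) <= -1
    b_ub = -np.ones(len(inputs))
    res = linprog(c=np.zeros(X.shape[1]), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * X.shape[1], method="highs")
    return res.status == 0                               # 0 = feasible (optimal)


for n in range(1, 4):  # n <= 3 keeps the 2**(2**n) enumeration fast
    inputs = np.array(list(product([0, 1], repeat=n)), dtype=float)
    count = sum(is_threshold_function(labels, inputs)
                for labels in product([0, 1], repeat=2 ** n))
    print(f"n={n}: {count} threshold functions, capacity = {np.log2(count):.2f} bits")
```

The counts it prints (4, 14, 104 for n = 1, 2, 3) match the known numbers of threshold functions; for large n the count grows as 2^{n^2(1+o(1))}, so the capacity of a linear threshold gate scales like n^2, which is the regime the paper's estimates address.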
Cite
Text
Baldi and Vershynin. "On Neuronal Capacity." Neural Information Processing Systems, 2018.
Markdown
[Baldi and Vershynin. "On Neuronal Capacity." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/baldi2018neurips-neuronal/)
BibTeX
@inproceedings{baldi2018neurips-neuronal,
title = {{On Neuronal Capacity}},
author = {Baldi, Pierre and Vershynin, Roman},
booktitle = {Neural Information Processing Systems},
year = {2018},
pages = {7729--7738},
url = {https://mlanthology.org/neurips/2018/baldi2018neurips-neuronal/}
}