Constant-Time Loading of Shallow 1-Dimensional Networks
Abstract
The complexity of learning in shallow 1-dimensional neural networks has been shown elsewhere to be linear in the size of the network. However, when a network has a huge number of units (as cortex does), even linear time may be unacceptable. Furthermore, the algorithm that was given to achieve this time was based on a single serial processor and was biologically implausible. In this work we consider the more natural parallel model of processing and demonstrate an expected-time complexity that is constant (i.e., independent of the size of the network). This holds even when inter-node communication channels are short and local, thus adhering to more biological and VLSI constraints.
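The abstract does not give the paper's algorithm, but the parallel model it assumes can be illustrated concretely. The sketch below (in Python, purely hypothetical; the function names, the averaging "loading step", and the use of a process pool are all assumptions for illustration) shows the key structural point: a 1-D chain of processors, one per unit, each doing a bounded amount of work on its own local data plus its immediate neighbours'. Since no processor's work grows with the chain length, the parallel depth stays constant as the network grows.

```python
# Minimal sketch of constant-depth parallel work on a 1-D chain of
# processors with only local (nearest-neighbour) communication.
# NOT the paper's loading algorithm -- an illustration of the model only.
from multiprocessing import Pool


def load_unit(args):
    """Hypothetical per-unit loading step.

    Each processor sees only its own local datum plus those of its two
    neighbours (local communication), and does a constant amount of
    work, independent of the total network size.
    """
    left, mine, right = args
    return (left + mine + right) / 3.0


def parallel_load(local_data):
    """Run one constant-depth parallel step over the whole 1-D chain."""
    n = len(local_data)
    # Gather each unit's bounded neighbourhood; boundary units see 0.0.
    neighbourhoods = [
        (local_data[i - 1] if i > 0 else 0.0,
         local_data[i],
         local_data[i + 1] if i < n - 1 else 0.0)
        for i in range(n)
    ]
    with Pool() as pool:
        return pool.map(load_unit, neighbourhoods)


if __name__ == "__main__":
    # Doubling the network size adds processors, not parallel steps.
    print(parallel_load([1.0, 2.0, 3.0, 4.0, 5.0]))
```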
Cite

Text:
Judd. "Constant-Time Loading of Shallow 1-Dimensional Networks." Neural Information Processing Systems, 1991.

Markdown:
[Judd. "Constant-Time Loading of Shallow 1-Dimensional Networks." Neural Information Processing Systems, 1991.](https://mlanthology.org/neurips/1991/judd1991neurips-constanttime/)

BibTeX:
@inproceedings{judd1991neurips-constanttime,
  title = {{Constant-Time Loading of Shallow 1-Dimensional Networks}},
  author = {Judd, Stephen},
  booktitle = {Neural Information Processing Systems},
  year = {1991},
  pages = {863--870},
  url = {https://mlanthology.org/neurips/1991/judd1991neurips-constanttime/}
}