Over-Complete Representations on Recurrent Neural Networks Can Support Persistent Percepts
Abstract
A striking aspect of cortical neural networks is the divergence of a relatively small number of input channels from the peripheral sensory apparatus into a large number of cortical neurons, an over-complete representation strategy. Cortical neurons are then connected by a sparse network of lateral synapses. Here we propose that such an architecture may increase the persistence of the representation of an incoming stimulus, or a percept. We demonstrate that for a family of networks in which the receptive field of each neuron is re-expressed by its outgoing connections, a represented percept can remain constant despite changing activity. We term this choice of connectivity REceptive FIeld REcombination (REFIRE) networks. The sparse REFIRE network may serve as a high-dimensional integrator and a biologically plausible model of the local cortical circuit.
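The persistence mechanism the abstract describes can be illustrated with a minimal linear sketch. Here the percept is a linear readout s = Da of the activity a through an over-complete dictionary D, and the recurrent weights are chosen as W = D⁺D (the projector onto the row space of D), so that DW = D and the readout is invariant under the dynamics. This particular choice of W and the simple rate dynamics are illustrative assumptions, not necessarily the paper's exact REFIRE construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Over-complete representation: k = 3 input channels, n = 12 neurons.
k, n = 3, 12
D = rng.standard_normal((k, n))   # each column: one neuron's receptive field

# Illustrative REFIRE-style recurrent weights: W = D+ D, the orthogonal
# projector onto the row space of D. Then D W = D (a pseudoinverse identity),
# so the readout D a is conserved by the dynamics below.
W = np.linalg.pinv(D) @ D

# Linear rate dynamics da/dt = -a + W a, Euler-discretized.
a = rng.standard_normal(n)        # initial neural activity
percept0 = D @ a                  # represented percept s = D a
dt = 0.01
for _ in range(5000):
    a = a + dt * (W @ a - a)

# The null-space component of a decays, so the activity pattern changes,
# yet the percept D a remains constant.
print(np.allclose(D @ a, percept0))  # True
```

Because W - I annihilates the row-space component of a and damps only the null-space component, the activity drifts while every readout along D is held fixed, which is the sense in which such a network acts as a high-dimensional integrator.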
Cite
Text
Druckmann and Chklovskii. "Over-Complete Representations on Recurrent Neural Networks Can Support Persistent Percepts." Neural Information Processing Systems, 2010.
Markdown
[Druckmann and Chklovskii. "Over-Complete Representations on Recurrent Neural Networks Can Support Persistent Percepts." Neural Information Processing Systems, 2010.](https://mlanthology.org/neurips/2010/druckmann2010neurips-overcomplete/)
BibTeX
@inproceedings{druckmann2010neurips-overcomplete,
title = {{Over-Complete Representations on Recurrent Neural Networks Can Support Persistent Percepts}},
author = {Druckmann, Shaul and Chklovskii, Dmitri B.},
booktitle = {Neural Information Processing Systems},
year = {2010},
pages = {541-549},
url = {https://mlanthology.org/neurips/2010/druckmann2010neurips-overcomplete/}
}