Preintegration Lateral Inhibition Enhances Unsupervised Learning

Abstract

A large and influential class of neural network architectures uses postintegration lateral inhibition as a mechanism for competition. We argue that these algorithms are computationally deficient in that they fail to generate, or learn, appropriate perceptual representations under certain circumstances. An alternative neural network architecture is presented here in which nodes compete for the right to receive inputs rather than for the right to generate outputs. This form of competition, implemented through preintegration lateral inhibition, does provide appropriate coding properties and can be used to learn such representations efficiently. Furthermore, this architecture is consistent with both neuroanatomical and neurophysiological data. We thus argue that preintegration lateral inhibition has computational advantages over conventional neural network architectures while remaining equally biologically plausible.
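The contrast the abstract draws can be made concrete with a small numerical sketch. The code below is illustrative only, not the paper's exact update equations: the weight matrix, the inhibition strength `alpha`, the max-over-rivals inhibition rule, and the fixed-point iteration are all assumptions made for the demonstration. It shows the qualitative difference the abstract describes: with winner-take-all competition on outputs, only one node can respond to an input containing two overlapping patterns, whereas competition for inputs lets both nodes respond to the feature they code for.

```python
# Illustrative sketch (NOT the authors' exact model): postintegration
# winner-take-all versus preintegration lateral inhibition.
# Weights, alpha, and iteration count are assumptions for the demo.

def post_integration(x, W):
    """Each node integrates all its inputs first, then nodes compete
    on their outputs (winner-take-all): only one node may respond."""
    y = [sum(w_i * x_i for w_i, x_i in zip(row, x)) for row in W]
    winner = max(range(len(y)), key=lambda j: y[j])
    return [y[j] if j == winner else 0.0 for j in range(len(y))]

def pre_integration(x, W, alpha=1.0, steps=10):
    """Nodes compete for the right to *receive* each input: before node j
    integrates input i, that input is reduced in proportion to the
    strongest claim any rival node makes on it."""
    n = len(W)
    y = [0.0] * n
    for _ in range(steps):
        new_y = []
        for j in range(n):
            total = 0.0
            for i, x_i in enumerate(x):
                # inhibition of input i at node j by the most active rival
                inhib = max((alpha * W[k][i] * y[k]
                             for k in range(n) if k != j), default=0.0)
                total += W[j][i] * max(0.0, x_i - inhib)
            new_y.append(total)
        y = new_y  # synchronous update; iterate toward a fixed point
    return y

# Two nodes with overlapping receptive fields: node 0 codes inputs
# {0, 1}, node 1 codes inputs {1, 2}. Present both patterns at once.
W = [[1.0, 1.0, 0.0],
     [0.0, 1.0, 1.0]]
x = [1.0, 1.0, 1.0]

print(post_integration(x, W))  # only one node responds
print(pre_integration(x, W))   # both nodes respond to "their" feature
```

Under these assumed parameters the preintegration network settles with both nodes active (the shared input is claimed away from each node by its rival, leaving each responding to its unique input), while the postintegration network silences one node entirely, losing half of the representation.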

Cite

Text

Spratling, Michael W., and M. H. Johnson. "Preintegration Lateral Inhibition Enhances Unsupervised Learning." Neural Computation 14 (2002): 2157-2179. doi:10.1162/089976602320264033

Markdown

[Spratling, Michael W., and M. H. Johnson. "Preintegration Lateral Inhibition Enhances Unsupervised Learning." Neural Computation 14 (2002): 2157-2179.](https://mlanthology.org/neco/2002/spratling2002neco-preintegration/) doi:10.1162/089976602320264033

BibTeX

@article{spratling2002neco-preintegration,
  title     = {{Preintegration Lateral Inhibition Enhances Unsupervised Learning}},
  author    = {Spratling, Michael W. and Johnson, M. H.},
  journal   = {Neural Computation},
  year      = {2002},
  pages     = {2157--2179},
  doi       = {10.1162/089976602320264033},
  volume    = {14},
  url       = {https://mlanthology.org/neco/2002/spratling2002neco-preintegration/}
}