Maximising Sensitivity in a Spiking Network
Abstract
We use unsupervised probabilistic machine learning ideas to try to explain the kinds of learning observed in real neurons, the goal being to connect abstract principles of self-organisation to known biophysical processes. For example, we would like to explain Spike Timing-Dependent Plasticity (see [5,6] and Figure 3A) in terms of information theory. Starting out, we explore the optimisation of a network sensitivity measure related to maximising the mutual information between input spike timings and output spike timings. Our derivations are analogous to those in ICA, except that the sensitivity of output timings to input timings is maximised, rather than the sensitivity of output ‘firing rates’ to inputs. ICA and related approaches have been successful in explaining the learning of many properties of early visual receptive fields in rate coding models, and we are hoping for similar gains in understanding of spike coding in networks, and how this is supported, in principled probabilistic ways, by cellular biophysical processes. For now, in our initial simulations, we show that our derived rule can learn synaptic weights which can unmix, or demultiplex, mixed spike trains. That is, it can recover independent point processes embedded in distributed correlated input spike trains, using an adaptive single-layer feedforward spiking network.
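The abstract does not give the paper's spiking rule itself, but it frames the derivation as analogous to ICA in rate-coding models. As a rough illustration of that rate-coded analogue only, here is a minimal sketch of natural-gradient Infomax ICA (Bell & Sejnowski-style, with Amari's natural gradient and a tanh nonlinearity) unmixing two independent super-Gaussian sources from a correlated mixture. All names, the mixing matrix, and the learning-rate settings are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent super-Gaussian (Laplacian) sources -- rate-coded
# stand-ins for the "independent point processes" of the abstract.
N = 5000
s = rng.laplace(size=(2, N))

# Linear mixing: a hypothetical 2x2 matrix producing "distributed,
# correlated" inputs from the independent sources.
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
x = A @ s

# Natural-gradient Infomax update (tanh nonlinearity, suited to
# super-Gaussian sources):  dW = lr * (I - 2 tanh(u) u^T / N) W,  u = W x.
W = np.eye(2)
lr = 0.05
for _ in range(1000):
    u = W @ x
    W += lr * (np.eye(2) - 2.0 * np.tanh(u) @ u.T / N) @ W

# Absolute correlations between recovered components and true sources:
# after unmixing, each output should track one source (up to sign/scale).
u = W @ x
C = np.abs(np.corrcoef(np.vstack([u, s]))[:2, 2:])
print(np.round(C, 2))
```

The paper's contribution is to carry this style of sensitivity-maximising derivation over from firing rates to spike timings; the sketch above only shows the rate-coding baseline being generalised.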
Cite
Text
Bell and Parra. "Maximising Sensitivity in a Spiking Network." Neural Information Processing Systems, 2004.
Markdown
[Bell and Parra. "Maximising Sensitivity in a Spiking Network." Neural Information Processing Systems, 2004.](https://mlanthology.org/neurips/2004/bell2004neurips-maximising/)
BibTeX
@inproceedings{bell2004neurips-maximising,
  title = {{Maximising Sensitivity in a Spiking Network}},
  author = {Bell, Anthony J. and Parra, Lucas C.},
  booktitle = {Neural Information Processing Systems},
  year = {2004},
  pages = {121-128},
  url = {https://mlanthology.org/neurips/2004/bell2004neurips-maximising/}
}