Eligibility Traces Provide a Data-Inspired Alternative to Backpropagation Through Time

Abstract

Learning in recurrent neural networks (RNNs) is most often implemented by gradient descent using backpropagation through time (BPTT), but BPTT does not accurately model how the brain learns. Instead, many experimental results on synaptic plasticity can be summarized as three-factor learning rules that combine eligibility traces of local neural activity with a third factor. We present here eligibility propagation (e-prop), a new factorization of the loss gradients in RNNs that fits the framework of three-factor learning rules when derived for biophysical spiking neuron models. When tested on the TIMIT speech recognition benchmark, e-prop is competitive with BPTT for training both artificial LSTM networks and spiking RNNs. Further analysis suggests that the diversity of learning signals and the accounting for slow internal neural dynamics are decisive for the learning efficiency of e-prop.
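The factorization the abstract describes can be sketched in a few lines. Below is a minimal, illustrative NumPy sketch for a non-spiking leaky RNN unit (the paper derives the rule for biophysical spiking models); all names here (alpha, W_rec, B, and the placeholder data) are assumptions for illustration, not the authors' code. Each recurrent weight keeps a purely local eligibility trace that filters presynaptic activity with the neuron's leak, and a learning signal broadcast from the output error supplies the third factor.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_rec, n_out, T = 3, 5, 2, 20
alpha = 0.9                             # leak of the hidden state (the slow internal dynamics)
W_in  = rng.normal(0.0, 0.5, (n_rec, n_in))
W_rec = rng.normal(0.0, 0.5, (n_rec, n_rec))
W_out = rng.normal(0.0, 0.5, (n_out, n_rec))
B = W_out.T                             # feedback weights for the learning signal (symmetric variant)

x = rng.normal(size=(T, n_in))          # input sequence (placeholder data)
y_star = rng.normal(size=(T, n_out))    # target sequence (placeholder data)

h = np.zeros(n_rec)                     # hidden states
z = np.zeros(n_rec)                     # unit outputs
eps = np.zeros((n_rec, n_rec))          # eligibility vectors, one per recurrent weight
grad = np.zeros_like(W_rec)

for t in range(T):
    z_prev = z
    h = alpha * h + W_in @ x[t] + W_rec @ z_prev
    z = np.tanh(h)
    # eligibility trace: local to the synapse, filtered with the neuron's own leak;
    # cross-neuron terms of the true gradient are dropped (the e-prop approximation)
    eps = alpha * eps + z_prev
    e = (1.0 - z**2)[:, None] * eps     # multiply by tanh'(h) of the postsynaptic unit
    # learning signal (third factor): output error fed back through B
    L = B @ (W_out @ z - y_star[t])
    grad += L[:, None] * e              # accumulate sum_t L_j(t) * e_ji(t)

W_rec -= 1e-2 * grad                    # one gradient-descent step on the recurrent weights

The symmetric choice B = W_out.T is only the simplest to write down; the paper also studies other learning signals, such as fixed random feedback, which is part of what makes the rule biologically plausible.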

Cite

Text

Bellec et al. "Eligibility Traces Provide a Data-Inspired Alternative to Backpropagation Through Time." NeurIPS 2019 Workshops: Neuro_AI, 2019.

Markdown

[Bellec et al. "Eligibility Traces Provide a Data-Inspired Alternative to Backpropagation Through Time." NeurIPS 2019 Workshops: Neuro_AI, 2019.](https://mlanthology.org/neuripsw/2019/bellec2019neuripsw-eligibility/)

BibTeX

@inproceedings{bellec2019neuripsw-eligibility,
  title     = {{Eligibility Traces Provide a Data-Inspired Alternative to Backpropagation Through Time}},
  author    = {Bellec, Guillaume and Scherr, Franz and Hajek, Elias and Salaj, Darjan and Subramoney, Anand and Legenstein, Robert and Maass, Wolfgang},
  booktitle = {NeurIPS 2019 Workshops: Neuro_AI},
  year      = {2019},
  url       = {https://mlanthology.org/neuripsw/2019/bellec2019neuripsw-eligibility/}
}