Learning to Learn with Feedback and Local Plasticity

Abstract

Developing effective biologically plausible learning rules for deep neural networks is important for advancing connections between deep learning and neuroscience. To date, local synaptic learning rules like those employed by the brain have failed to match the performance of backpropagation in deep networks. In this work, we employ meta-learning to discover networks that learn using feedback connections and local, biologically motivated learning rules. Importantly, the feedback connections are not tied to the feedforward weights, avoiding any biologically implausible weight transport. We show mathematically that this approach has sufficient expressivity to approximate any online learning algorithm. Our experiments show that the meta-trained networks effectively use feedback connections to perform online credit assignment in multi-layer architectures. Moreover, we demonstrate empirically that this model outperforms a state-of-the-art gradient-based meta-learning algorithm for continual learning on regression and classification benchmarks. This approach represents a step toward biologically plausible learning mechanisms that can not only match gradient descent-based learning, but also overcome its limitations.
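To make the weight-transport point concrete, here is a minimal sketch of the general idea: a two-layer network whose hidden-layer updates are driven by a separate feedback matrix `B` rather than the transpose of the forward weights `W2`, so no weight transport is required. All names, shapes, and the simple delta-style rule below are our own illustrative assumptions, standing in for the meta-learned plasticity rule the paper actually trains; this is a feedback-alignment-style sketch, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes for illustration (not taken from the paper).
n_in, n_hid, n_out = 4, 8, 1
W1 = rng.normal(0, 0.5, (n_hid, n_in))   # feedforward, layer 1
W2 = rng.normal(0, 0.5, (n_out, n_hid))  # feedforward, layer 2
B = rng.normal(0, 0.5, (n_hid, n_out))   # feedback weights, NOT tied to W2
lr = 0.02


def step(x, y):
    """One online update. Each weight change uses only locally available
    quantities: pre/post activity and an error signal delivered through B,
    never the transpose of W2 (no weight transport)."""
    global W1, W2
    h = np.tanh(W1 @ x)
    y_hat = W2 @ h
    e = y_hat - y                              # error, local to the output
    W2 -= lr * np.outer(e, h)                  # local delta-style update
    fb = B @ e                                 # error routed back through B
    W1 -= lr * np.outer(fb * (1 - h ** 2), x)  # local update at hidden layer
    return float((e ** 2).sum())


# Toy online regression task against a random linear target.
target_W = rng.normal(0, 0.5, (n_out, n_in))
losses = []
for _ in range(500):
    x = rng.normal(size=n_in)
    losses.append(step(x, target_W @ x))
```

Even with feedback weights decoupled from the feedforward path, the hidden layer receives a usable credit-assignment signal on this toy task; the paper's contribution is to meta-learn the plasticity rule and feedback pathway rather than fixing them by hand as done here.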

Cite

Text

Lindsey. "Learning to Learn with Feedback and Local Plasticity." NeurIPS 2019 Workshops: Neuro_AI, 2019.

Markdown

[Lindsey. "Learning to Learn with Feedback and Local Plasticity." NeurIPS 2019 Workshops: Neuro_AI, 2019.](https://mlanthology.org/neuripsw/2019/lindsey2019neuripsw-learning/)

BibTeX

@inproceedings{lindsey2019neuripsw-learning,
  title     = {{Learning to Learn with Feedback and Local Plasticity}},
  author    = {Lindsey, Jack},
  booktitle = {NeurIPS 2019 Workshops: Neuro_AI},
  year      = {2019},
  url       = {https://mlanthology.org/neuripsw/2019/lindsey2019neuripsw-learning/}
}