Learning to Solve the Credit Assignment Problem
Abstract
Backpropagation is driving today's artificial neural networks. However, despite extensive research, it remains unclear whether the brain implements this algorithm. Among neuroscientists, reinforcement learning (RL) algorithms are often seen as a realistic alternative. However, the convergence rate of such learning scales poorly with the number of neurons involved. Here we propose a hybrid learning approach, in which each neuron uses an RL-type strategy to learn how to approximate the gradients that backpropagation would provide. We show that our approach learns to approximate the gradient, and can match the performance of gradient-based learning on fully connected and convolutional networks. Learning feedback weights provides a biologically plausible mechanism for achieving good performance, without the need for precise, pre-specified learning rules.
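To make the idea concrete, below is a minimal NumPy sketch of the kind of scheme the abstract describes: feedback weights are learned so that a feedback-propagated error signal matches a noisy, perturbation-based (RL-style) estimate of the gradient, and the forward weights are then trained with that learned feedback signal instead of true backprop. The two-layer architecture, the regression-style update for the feedback matrix `B`, and all hyperparameters are illustrative assumptions, not the paper's exact algorithm.

```python
# Sketch: learn feedback weights B so that B-propagated errors approximate
# node-perturbation estimates of the true gradient, then train forward
# weights with the learned feedback signal. Sizes and rates are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 10, 32, 1
W1 = rng.normal(0, 0.1, (n_hid, n_in))   # forward weights, layer 1
W2 = rng.normal(0, 0.1, (n_out, n_hid))  # forward weights, layer 2
B = rng.normal(0, 0.1, (n_out, n_hid))   # feedback weights (learned, not tied to W2)
sigma, lr, lr_B = 0.1, 1e-2, 1e-3

def forward(x, noise=None):
    a = W1 @ x
    if noise is not None:
        a = a + noise                     # perturb hidden pre-activations
    h = np.maximum(a, 0.0)                # ReLU
    return a, h, W2 @ h

for step in range(5000):
    x = rng.normal(size=n_in)
    y_target = np.sin(x.sum(keepdims=True))

    # Baseline and perturbed passes: an RL-style (node perturbation) estimate
    # of the gradient of the loss w.r.t. hidden pre-activations.
    a, h, y = forward(x)
    xi = rng.normal(0, sigma, n_hid)
    _, _, y_pert = forward(x, noise=xi)
    L = 0.5 * np.sum((y - y_target) ** 2)
    L_pert = 0.5 * np.sum((y_pert - y_target) ** 2)
    lam_hat = (L_pert - L) * xi / sigma**2   # noisy estimate of dL/da

    # Feedback-propagated error using the learned feedback weights B.
    err_out = y - y_target                   # dL/dy for squared error
    e = (B.T @ err_out) * (a > 0)            # candidate credit signal

    # Train B so the feedback signal approximates the perturbation estimate
    # (gradient step on ||e - lam_hat||^2 w.r.t. B).
    B -= lr_B * np.outer(err_out, (e - lam_hat) * (a > 0))

    # Update forward weights with the learned feedback signal, not backprop.
    W2 -= lr * np.outer(err_out, h)
    W1 -= lr * np.outer(e, x)
```

In this sketch the noisy perturbation estimate is only used to shape `B`; once `B` aligns with the true feedback pathway, the forward updates behave like approximate gradient descent without the weight transport that backpropagation requires.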
Cite

Lansdell et al. "Learning to Solve the Credit Assignment Problem." NeurIPS 2019 Workshops: Neuro_AI, 2019.

BibTeX:
@inproceedings{lansdell2019neuripsw-learning,
title = {{Learning to Solve the Credit Assignment Problem}},
author = {Lansdell, Benjamin James and Prakash, Prashanth and Kording, Konrad Paul},
booktitle = {NeurIPS 2019 Workshops: Neuro_AI},
year = {2019},
url = {https://mlanthology.org/neuripsw/2019/lansdell2019neuripsw-learning/}
}