Deep Learning Without Weight Transport
Abstract
Current algorithms for deep learning probably cannot run in the brain because they rely on weight transport, where forward-path neurons transmit their synaptic weights to a feedback path, in a way that is likely impossible biologically. An algorithm called feedback alignment achieves deep learning without weight transport by using random feedback weights, but it performs poorly on hard visual-recognition tasks. Here we describe two mechanisms — a neural circuit called a weight mirror and a modification of an algorithm proposed by Kolen and Pollack in 1994 — both of which let the feedback path learn appropriate synaptic weights quickly and accurately even in large networks, without weight transport or complex wiring. Tested on the ImageNet visual-recognition task, these mechanisms outperform both feedback alignment and the newer sign-symmetry method, and nearly match backprop, the standard algorithm of deep learning, which uses weight transport.
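The feedback-alignment idea summarized above can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's implementation: the two-layer architecture, learning rate, and regression task are all assumptions chosen for brevity. The key point is that the backward pass uses a fixed random matrix `B` where backprop would use the transported transpose `W2.T`.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 16, 2

W1 = rng.normal(0, 0.5, (n_hid, n_in))
W2 = rng.normal(0, 0.5, (n_out, n_hid))
B = rng.normal(0, 0.5, (n_hid, n_out))   # fixed random feedback weights

# Toy regression task: a random linear input-to-target mapping (assumption).
X = rng.normal(size=(100, n_in))
T = X @ rng.normal(size=(n_in, n_out))

def loss():
    H = np.tanh(X @ W1.T)
    return np.mean((H @ W2.T - T) ** 2)

lr = 0.05
initial = loss()
for _ in range(200):
    H = np.tanh(X @ W1.T)        # hidden activity
    Y = H @ W2.T                 # network output
    e = Y - T                    # output error
    dW2 = e.T @ H / len(X)
    # Backprop would compute e @ W2 here; feedback alignment instead
    # sends the error back through the fixed random matrix B.
    delta = (e @ B.T) * (1 - H ** 2)
    dW1 = delta.T @ X / len(X)
    W2 -= lr * dW2
    W1 -= lr * dW1
final = loss()
print(initial, final)
```

Training still reduces the loss because the forward weights gradually align with the fixed feedback weights; the weight-mirror and Kolen–Pollack mechanisms described in the paper go further by letting `B` itself learn to match the forward path.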
Cite
Akrout et al. "Deep Learning Without Weight Transport." Neural Information Processing Systems, 2019.

BibTeX:
@inproceedings{akrout2019neurips-deep,
  title = {{Deep Learning Without Weight Transport}},
  author = {Akrout, Mohamed and Wilson, Collin and Humphreys, Peter and Lillicrap, Timothy and Tweed, Douglas B},
  booktitle = {Neural Information Processing Systems},
  year = {2019},
  pages = {976--984},
  url = {https://mlanthology.org/neurips/2019/akrout2019neurips-deep/}
}