Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures
Abstract
Despite being the workhorse of deep learning, the backpropagation algorithm is no panacea. It enforces sequential layer updates, thus preventing efficient parallelization of the training process. Furthermore, its biological plausibility is being challenged. Alternative schemes have been devised; yet, under the constraint of synaptic asymmetry, none have scaled to modern deep learning tasks and architectures. Here, we challenge this perspective, and study the applicability of Direct Feedback Alignment (DFA) to neural view synthesis, recommender systems, geometric learning, and natural language processing. In contrast with previous studies limited to computer vision tasks, our findings show that DFA successfully trains a wide range of state-of-the-art deep learning architectures, with performance close to that of fine-tuned backpropagation. Where a larger gap between DFA and backpropagation persists, as in Transformers, we attribute it to the need to rethink common practices for large and complex architectures. Contrary to common belief, our work supports the view that challenging tasks can be tackled in the absence of weight transport.
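To make the mechanism concrete, below is a minimal NumPy sketch of a DFA update for a small multilayer perceptron. It is illustrative only: the layer sizes, learning rate, tanh activations, squared loss, and the fixed feedback matrices `B1`/`B2` are assumptions made for the example, not details taken from the paper. The defining property is that each hidden layer receives the output error through its own fixed random feedback matrix, so no transposed forward weights (no weight transport) are required.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions and learning rate (assumptions, not from the paper).
n_in, n_h1, n_h2, n_out = 32, 64, 64, 10
lr = 0.01

# Forward weights are trained; B1 and B2 are fixed random feedback matrices
# mapping the output error directly to each hidden layer.
W1 = rng.normal(0, 0.1, (n_h1, n_in))
W2 = rng.normal(0, 0.1, (n_h2, n_h1))
W3 = rng.normal(0, 0.1, (n_out, n_h2))
B1 = rng.normal(0, 0.1, (n_h1, n_out))
B2 = rng.normal(0, 0.1, (n_h2, n_out))

def dfa_step(x, y):
    """One DFA update on a single example (inputs as column vectors)."""
    global W1, W2, W3
    # Forward pass: two tanh hidden layers, linear readout.
    a1 = W1 @ x;  h1 = np.tanh(a1)
    a2 = W2 @ h1; h2 = np.tanh(a2)
    y_hat = W3 @ h2
    e = y_hat - y                      # output error under a squared loss
    # DFA: project the output error straight to each hidden layer through
    # its fixed random matrix, bypassing the layer-by-layer backward chain.
    d2 = (B2 @ e) * (1 - h2 ** 2)      # tanh'(a2) = 1 - tanh(a2)^2
    d1 = (B1 @ e) * (1 - h1 ** 2)
    # Each update depends only on e and local activity, not on other layers.
    W3 -= lr * e  @ h2.T
    W2 -= lr * d2 @ h1.T
    W1 -= lr * d1 @ x.T
    return float(0.5 * (e ** 2).sum())

# Toy usage: fit a fixed random input-target pair.
x = rng.normal(size=(n_in, 1))
y = rng.normal(size=(n_out, 1))
for step in range(200):
    loss = dfa_step(x, y)
print(f"final loss: {loss:.4f}")
```

Because every hidden-layer update depends only on the output error and local activations, the updates can in principle be computed in parallel once the forward pass finishes, which is the parallelization advantage over backpropagation's sequential backward pass that the abstract alludes to.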
Cite
Text
Launay et al. "Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures." Neural Information Processing Systems, 2020.
Markdown
[Launay et al. "Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/launay2020neurips-direct/)
BibTeX
@inproceedings{launay2020neurips-direct,
  title     = {{Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures}},
  author    = {Launay, Julien and Poli, Iacopo and Boniface, François and Krzakala, Florent},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/launay2020neurips-direct/}
}