Deep Layer-Wise Networks Have Closed-Form Weights
Abstract
There is currently a debate within the neuroscience community over the likelihood of the brain performing backpropagation (BP). To better mimic the brain, training a network one layer at a time with only a "single forward pass" has been proposed as an alternative that bypasses BP; we refer to these networks as "layer-wise" networks. We continue the work on layer-wise networks by answering two outstanding questions. First, do they have a closed-form solution? Second, how do we know when to stop adding more layers? This work proves that the "Kernel Mean Embedding" is the closed-form solution that achieves the network's global optimum while driving these networks to converge towards a highly desirable kernel for classification; we call it the Neural Indicator Kernel.
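The closed-form object in question is the empirical kernel mean embedding (KME), which for a class c is simply the average feature map over that class's samples, \(\hat{\mu}_c = \frac{1}{n_c}\sum_{x_i \in c} \phi(x_i)\). The following is a minimal, hypothetical sketch, not the paper's exact layer construction: it uses per-class empirical KMEs as closed-form "weights" for one layer-wise step, representing each sample by its mean kernel similarity to every class. The RBF kernel, the bandwidth `gamma`, and the toy two-class data are illustrative assumptions.

```python
# Minimal sketch (not the authors' exact construction): empirical kernel mean
# embeddings (KMEs) used as closed-form "weights" for one layer-wise step.
# Assumptions: an RBF kernel, and a layer that maps each sample to its mean
# kernel similarity against every class (the empirical KME evaluated at that
# sample). `gamma` and the toy data are illustrative only.
import numpy as np


def rbf_kernel(A, B, gamma=1.0):
    """k(a, b) = exp(-gamma * ||a - b||^2) for all pairs of rows in A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)


def kme_layer(X, y, gamma=1.0):
    """One closed-form layer: column c of the output holds the empirical KME
    of class c evaluated at each sample, (1/n_c) * sum_{x_j in c} k(x_i, x_j)."""
    classes = np.unique(y)
    return np.column_stack(
        [rbf_kernel(X, X[y == c], gamma).mean(axis=1) for c in classes]
    )


# Toy two-class data: the layer output separates the classes without any
# gradient-based training, since the "weights" (the KMEs) are closed form.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
H = kme_layer(X, y, gamma=0.5)
print(H[:3])   # class-0 samples: higher similarity to the class-0 embedding
print(H[-3:])  # class-1 samples: higher similarity to the class-1 embedding
```

Stacking such layers (feeding `H` back in as the next layer's input) is where the paper's convergence result and stopping criterion come into play; this sketch only illustrates the single closed-form step.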
Cite
Text
Tzu Wu et al. "Deep Layer-Wise Networks Have Closed-Form Weights." Artificial Intelligence and Statistics, 2022.
Markdown
[Tzu Wu et al. "Deep Layer-Wise Networks Have Closed-Form Weights." Artificial Intelligence and Statistics, 2022.](https://mlanthology.org/aistats/2022/tzuwu2022aistats-deep/)
BibTeX
@inproceedings{tzuwu2022aistats-deep,
title = {{Deep Layer-Wise Networks Have Closed-Form Weights}},
author = {Tzu Wu, Chieh and Masoomi, Aria and Gretton, Arthur and Dy, Jennifer},
booktitle = {Artificial Intelligence and Statistics},
year = {2022},
pages = {188-225},
volume = {151},
url = {https://mlanthology.org/aistats/2022/tzuwu2022aistats-deep/}
}