Target Propagation via Regularized Inversion for Recurrent Neural Networks
Abstract
Target Propagation (TP) algorithms compute targets instead of gradients along a neural network and propagate them backward in a way that is similar to, yet different from, gradient back-propagation (BP). The idea initially appeared as a perturbative alternative to BP that could improve the accuracy of gradient evaluation when training multi-layer neural networks (LeCun, 1985) and has since gained popularity as a biologically plausible counterpart of BP. However, TP has spawned many variations, and a simple version of it remains worthwhile. Revisiting the insights of LeCun (1985) and Lee et al. (2015), we present a simple version of TP based on regularized inversions of the layers of recurrent neural networks. The proposed TP algorithm is easily implementable in a differentiable programming framework. We illustrate the algorithm on recurrent neural networks trained on long sequences in several sequence modeling problems and delineate the regimes in which the computational complexity of TP can be attractive compared to BP.
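The abstract describes propagating targets backward through the layers of a recurrent network by regularized inversion, in a differentiable programming framework. Below is a minimal sketch of that idea in JAX, assuming a vanilla tanh recurrent cell; the names (`rnn_cell`, `regularized_inverse`, `target_propagation`) and the hyperparameters (`lam`, `n_inner_steps`, `lr`) are illustrative assumptions, not the paper's exact formulation.

```python
import jax
import jax.numpy as jnp


def rnn_cell(params, h, x):
    """One recurrent step: h_t = tanh(h_{t-1} W_h + x_t W_x + b). (Assumed cell, for illustration.)"""
    W_h, W_x, b = params
    return jnp.tanh(h @ W_h + x @ W_x + b)


def regularized_inverse(params, x, h_prev, target, lam=1e-1, n_inner_steps=10, lr=0.5):
    """Approximately invert the layer: find h minimizing
       ||f(h, x) - target||^2 + lam * ||h - h_prev||^2, via a few gradient steps."""
    def objective(h):
        fit = jnp.sum((rnn_cell(params, h, x) - target) ** 2)
        reg = lam * jnp.sum((h - h_prev) ** 2)
        return fit + reg

    grad_fn = jax.grad(objective)
    h = h_prev
    for _ in range(n_inner_steps):
        h = h - lr * grad_fn(h)
    return h


def target_propagation_loss(params, xs, h0, final_target):
    """Forward pass, then backward target propagation via regularized inversion,
       returning the sum of local layer-wise losses."""
    # Forward pass: store hidden states h_0, ..., h_T.
    hs = [h0]
    for x in xs:
        hs.append(rnn_cell(params, hs[-1], x))

    # Backward pass: each hidden state gets a target from inverting the next layer.
    targets = [None] * len(hs)
    targets[-1] = final_target
    for t in range(len(xs) - 1, -1, -1):
        # stop_gradient: targets are treated as constants in the local losses.
        targets[t] = jax.lax.stop_gradient(
            regularized_inverse(params, xs[t], hs[t], targets[t + 1])
        )

    # Local losses: move each layer's output toward its target.
    local_losses = [
        jnp.sum((rnn_cell(params, hs[t], xs[t]) - targets[t + 1]) ** 2)
        for t in range(len(xs))
    ]
    return sum(local_losses)
```

One would then update `params` with `jax.grad(target_propagation_loss)(params, xs, h0, final_target)`, so that each layer is trained against its propagated target rather than a back-propagated gradient; this is only a sketch of the general TP scheme under the stated assumptions.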
Cite
Text
Roulet and Harchaoui. "Target Propagation via Regularized Inversion for Recurrent Neural Networks." Transactions on Machine Learning Research, 2023.
Markdown
[Roulet and Harchaoui. "Target Propagation via Regularized Inversion for Recurrent Neural Networks." Transactions on Machine Learning Research, 2023.](https://mlanthology.org/tmlr/2023/roulet2023tmlr-target/)
BibTeX
@article{roulet2023tmlr-target,
  title   = {{Target Propagation via Regularized Inversion for Recurrent Neural Networks}},
  author  = {Roulet, Vincent and Harchaoui, Zaid},
  journal = {Transactions on Machine Learning Research},
  year    = {2023},
  url     = {https://mlanthology.org/tmlr/2023/roulet2023tmlr-target/}
}