AFRNN: Stable RNN with Top Down Feedback and Antisymmetry
Abstract
Recurrent Neural Networks (RNNs) are an integral part of modern machine learning and are well suited to tasks on sequential data. However, long sequences remain a challenge for these models due to the well-known exploding/vanishing gradient problem. In this work, we build on recent approaches that interpret the gradient problem as an instability of the underlying dynamical system. We extend previous approaches to systems with top-down feedback, which is abundant in biological neural networks. We prove that the resulting system is stable for arbitrary depth and width and confirm this empirically. We further show that its performance is on par with LSTMs and related approaches on standard benchmarks.
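The abstract's stability argument follows the antisymmetric-RNN line of work: if the recurrent weight matrix is antisymmetric, the eigenvalues of the underlying continuous-time system are purely imaginary, and a small diffusion term keeps the discretized dynamics stable. As a rough illustration only, here is a minimal NumPy sketch of one such recurrent step in the spirit of AntisymmetricRNN, which this paper extends; the function name and the constants `eps` and `gamma` are illustrative assumptions, and the paper's actual top-down feedback coupling is not reproduced here.

```python
import numpy as np

def antisymmetric_rnn_step(h, x, W, V, b, eps=0.01, gamma=0.01):
    """One forward-Euler step of an antisymmetric RNN (illustrative sketch).

    The effective recurrent matrix (W - W.T) is antisymmetric, so its
    eigenvalues are purely imaginary; subtracting gamma * I adds a small
    diffusion term that stabilizes the discretized system.
    """
    A = W - W.T - gamma * np.eye(W.shape[0])
    return h + eps * np.tanh(A @ h + V @ x + b)

# Toy usage: the hidden-state norm stays bounded over a long sequence.
rng = np.random.default_rng(0)
n_hidden, n_in = 32, 8
W = rng.standard_normal((n_hidden, n_hidden)) * 0.1
V = rng.standard_normal((n_hidden, n_in)) * 0.1
b = np.zeros(n_hidden)
h = np.zeros(n_hidden)
for t in range(1000):
    h = antisymmetric_rnn_step(h, rng.standard_normal(n_in), W, V, b)
print(np.linalg.norm(h))  # stays moderate rather than exploding
```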
Cite
Text
Schwabe et al. "AFRNN: Stable RNN with Top Down Feedback and Antisymmetry." Proceedings of The 14th Asian Conference on Machine Learning, 2022.
Markdown
[Schwabe et al. "AFRNN: Stable RNN with Top Down Feedback and Antisymmetry." Proceedings of The 14th Asian Conference on Machine Learning, 2022.](https://mlanthology.org/acml/2022/schwabe2022acml-afrnn/)
BibTeX
@inproceedings{schwabe2022acml-afrnn,
  title     = {{AFRNN: Stable RNN with Top Down Feedback and Antisymmetry}},
  author    = {Schwabe, Tim and Glasmachers, Tobias and Acosta, Maribel},
  booktitle = {Proceedings of The 14th Asian Conference on Machine Learning},
  year      = {2022},
  pages     = {880--894},
  volume    = {189},
  url       = {https://mlanthology.org/acml/2022/schwabe2022acml-afrnn/}
}