Dyadic Learning in Recurrent and Feedforward Models
Abstract
From electrical to biological circuits, feedback plays a critical role in amplifying, dampening, and stabilizing signals. In local activity-difference-based alternatives to backpropagation, feedback connections are used to propagate learning signals in deep neural networks. We propose a saddle-point-based framework using dyadic (two-state) neurons for training a family of parameterized models, which includes the symmetric Hopfield model, pure feedforward networks, and a less explored skew-symmetric Hopfield variant. The resulting learning method reduces to equilibrium propagation (EP) for symmetric Hopfield models and to dual propagation (DP) for feedforward networks, while the skew-symmetric Hopfield setting yields a new method with desirable robustness properties. Experimentally, we demonstrate that the new skew-symmetric Hopfield model performs on par with EP and DP in terms of predictive performance, while exhibiting enhanced robustness to input changes and strong feedback, and being less prone to neural saturation. We identify the fundamentally different types of feedback signals propagated in each model as the main cause of the differences in robustness and saturation.
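As a purely illustrative companion to the abstract, the sketch below shows dyadic (two-state) neuron relaxation and a local, difference-based weight update in the dual-propagation style for a small feedforward network. The layer sizes, ReLU activation, nudging strength `beta`, number of relaxation steps, and the helper names `dyadic_inference` and `local_update` are assumptions made for this sketch; it is not the paper's exact formulation or equations.

```python
import numpy as np

# Illustrative sketch only (not the authors' exact method): each neuron keeps
# two states s+ and s-. Their mean carries the forward signal, their
# difference carries the top-down learning signal.

rng = np.random.default_rng(0)
sizes = [4, 8, 3]  # input, hidden, output dimensions (assumed)
W = [rng.normal(0, 0.1, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]

def relu(x):
    return np.maximum(x, 0.0)

def dyadic_inference(x, y, beta=0.1, n_steps=20):
    """Relax the two states s+ and s- of every hidden/output unit."""
    s_pos = [x.copy()] + [np.zeros(m) for m in sizes[1:]]
    s_neg = [x.copy()] + [np.zeros(m) for m in sizes[1:]]
    for _ in range(n_steps):
        for i in range(1, len(sizes)):
            mean_below = 0.5 * (s_pos[i - 1] + s_neg[i - 1])
            bottom_up = W[i - 1] @ mean_below
            if i < len(sizes) - 1:
                # hidden layer: feedback is the difference signal from above
                diff_above = s_pos[i + 1] - s_neg[i + 1]
                feedback = W[i].T @ diff_above
            else:
                # output layer: nudge the two states towards / away from target
                feedback = beta * (y - 0.5 * (s_pos[i] + s_neg[i]))
            s_pos[i] = relu(bottom_up + 0.5 * feedback)
            s_neg[i] = relu(bottom_up - 0.5 * feedback)
    return s_pos, s_neg

def local_update(s_pos, s_neg, lr=0.01):
    """Hebbian-style local update: state difference times mean activity below."""
    for i in range(len(W)):
        mean_below = 0.5 * (s_pos[i] + s_neg[i])
        diff_above = s_pos[i + 1] - s_neg[i + 1]
        W[i] += lr * np.outer(diff_above, mean_below)

x = rng.normal(size=sizes[0])
y = np.eye(sizes[-1])[1]  # one-hot target (assumed)
s_pos, s_neg = dyadic_inference(x, y)
local_update(s_pos, s_neg)
```

In this sketch the learning signal at each layer is carried entirely by the difference s+ - s-, which is the kind of feedback quantity the abstract contrasts across the symmetric, skew-symmetric, and feedforward settings.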
Cite
Text
Høier et al. "Dyadic Learning in Recurrent and Feedforward Models." NeurIPS 2024 Workshops: NeuroAI, 2024.
Markdown
[Høier et al. "Dyadic Learning in Recurrent and Feedforward Models." NeurIPS 2024 Workshops: NeuroAI, 2024.](https://mlanthology.org/neuripsw/2024/hier2024neuripsw-dyadic-a/)
BibTeX
@inproceedings{hier2024neuripsw-dyadic-a,
title = {{Dyadic Learning in Recurrent and Feedforward Models}},
author = {Høier, Rasmus and Kalinin, Kirill and Ernoult, Maxence and Zach, Christopher},
booktitle = {NeurIPS 2024 Workshops: NeuroAI},
year = {2024},
url = {https://mlanthology.org/neuripsw/2024/hier2024neuripsw-dyadic-a/}
}