Unfolding Recurrence by Green’s Functions for Optimized Reservoir Computing
Abstract
Cortical networks are strongly recurrent, and neurons have intrinsic temporal dynamics. This sets them apart from deep feed-forward networks. Despite the tremendous progress in the application of deep feed-forward networks and their theoretical understanding, it remains unclear how the interplay of recurrence and non-linearities in recurrent cortical networks contributes to their function. The purpose of this work is to present a solvable recurrent network model that links to feed-forward networks. By perturbative methods we transform the time-continuous, recurrent dynamics into an effective feed-forward structure of linear and non-linear temporal kernels. The resulting analytical expressions allow us to build optimal time-series classifiers from random reservoir networks. Firstly, this allows us to optimize not only the readout vectors, but also the input projection, demonstrating a strong potential performance gain. Secondly, the analysis exposes how the second-order stimulus statistics are a crucial element that interacts with the non-linearity of the dynamics and boosts performance.
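For readers unfamiliar with the reservoir-computing setting the abstract builds on, the following is a minimal generic sketch (not the paper's method): a fixed random recurrent network is driven by an input time series, and only a linear readout is trained, here by ridge regression on a simple delayed-memory task. All sizes and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 1 input channel, 100 reservoir units, 500 time steps
n_in, n_res, T = 1, 100, 500

# Fixed random input projection and recurrent weights;
# recurrent weights rescaled to spectral radius 0.9 (echo-state regime)
W_in = rng.normal(0.0, 1.0, (n_res, n_in))
W = rng.normal(0.0, 1.0 / np.sqrt(n_res), (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

# Drive the non-linear reservoir with a random input time series
u = rng.normal(0.0, 1.0, (T, n_in))
x = np.zeros(n_res)
states = np.empty((T, n_res))
for t in range(T):
    x = np.tanh(W @ x + W_in @ u[t])
    states[t] = x

# Target: a delayed copy of the input (a standard memory task)
delay = 3
y = np.roll(u[:, 0], delay)

# Train only the linear readout, via ridge regression
lam = 1e-3
w_out = np.linalg.solve(states.T @ states + lam * np.eye(n_res),
                        states.T @ y)
y_hat = states @ w_out
```

In this classical setup the recurrent weights `W` and input projection `W_in` stay random and untrained; the paper's contribution is an analytical unfolding of such dynamics that makes the input projection itself optimizable.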
Cite
Text

Nestler et al. "Unfolding Recurrence by Green’s Functions for Optimized Reservoir Computing." Neural Information Processing Systems, 2020.

Markdown

[Nestler et al. "Unfolding Recurrence by Green’s Functions for Optimized Reservoir Computing." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/nestler2020neurips-unfolding/)

BibTeX
@inproceedings{nestler2020neurips-unfolding,
title = {{Unfolding Recurrence by Green’s Functions for Optimized Reservoir Computing}},
author = {Nestler, Sandra and Keup, Christian and Dahmen, David and Gilson, Matthieu and Rauhut, Holger and Helias, Moritz},
booktitle = {Neural Information Processing Systems},
year = {2020},
url = {https://mlanthology.org/neurips/2020/nestler2020neurips-unfolding/}
}