Adjoint Operator Algorithms for Faster Learning in Dynamical Neural Networks
Abstract
A methodology for faster supervised learning in dynamical nonlinear neural networks is presented. It exploits the concept of adjoint operators to enable computation of changes in the network's response due to perturbations in all system parameters, using the solution of a single set of appropriately constructed linear equations. The lower bound on speedup per learning iteration over conventional methods for calculating the neuromorphic energy gradient is O(N²), where N is the number of neurons in the network.
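The core idea in the abstract can be illustrated with a minimal sketch, not taken from the paper itself: for a steady-state linear system, the gradient of an error functional with respect to all parameters can be obtained either by one linear solve per parameter (the direct method) or by a single solve with the transposed (adjoint) operator. All names and the choice of a linear model are illustrative assumptions, using NumPy.

```python
import numpy as np

# Illustrative setup (not the paper's notation): steady state satisfies
# A u = B p, with N state variables and P parameters, and the error is
# E(u) = 0.5 * ||u - u_target||^2. We want dE/dp_k for every k.
rng = np.random.default_rng(0)
N, P = 5, 8
A = np.eye(N) + 0.1 * rng.standard_normal((N, N))
B = rng.standard_normal((N, P))
p = rng.standard_normal(P)
u_target = rng.standard_normal(N)

u = np.linalg.solve(A, B @ p)      # forward solve for the state
dE_du = u - u_target               # gradient of E with respect to u

# Direct method: du/dp_k = A^{-1} B[:, k], so one solve per parameter
# (P solves in total; in a network, P grows like N^2 for the weights).
grad_direct = np.array([dE_du @ np.linalg.solve(A, B[:, k])
                        for k in range(P)])

# Adjoint method: a single solve with the transposed operator,
# A^T lam = dE/du, after which every component of the gradient is
# just an inner product: dE/dp_k = lam^T B[:, k].
lam = np.linalg.solve(A.T, dE_du)
grad_adjoint = B.T @ lam

assert np.allclose(grad_direct, grad_adjoint)
```

The speedup claim in the abstract mirrors this count: the direct route costs one linear solve per parameter, while the adjoint route costs one solve total, with the remaining work reduced to inner products.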
Cite

Text:
Barhen et al. "Adjoint Operator Algorithms for Faster Learning in Dynamical Neural Networks." Neural Information Processing Systems, 1989.

Markdown:
[Barhen et al. "Adjoint Operator Algorithms for Faster Learning in Dynamical Neural Networks." Neural Information Processing Systems, 1989.](https://mlanthology.org/neurips/1989/barhen1989neurips-adjoint/)

BibTeX:
@inproceedings{barhen1989neurips-adjoint,
title = {{Adjoint Operator Algorithms for Faster Learning in Dynamical Neural Networks}},
author = {Barhen, Jacob and Toomarian, Nikzad Benny and Gulati, Sandeep},
booktitle = {Neural Information Processing Systems},
year = {1989},
pages = {498-508},
url = {https://mlanthology.org/neurips/1989/barhen1989neurips-adjoint/}
}