Attractor Dynamics in Feedforward Neural Networks
Abstract
We study probabilistic generative models parameterized by feedforward neural networks. An attractor dynamics for probabilistic inference in these models is derived from a mean field approximation for large, layered sigmoidal networks. Fixed points of the dynamics correspond to solutions of the mean field equations, which relate the statistics of each unit to those of its Markov blanket. We establish global convergence of the dynamics by providing a Lyapunov function and show that the dynamics generate the signals required for unsupervised learning. Our results for feedforward networks provide a counterpart to those of Cohen-Grossberg and Hopfield for symmetric networks.
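The abstract describes a damped fixed-point iteration whose stable states solve mean field equations relating each unit's mean to its Markov blanket (parents and children). The sketch below is a simplified illustration of that idea for a sigmoid belief network, not the paper's exact dynamics: it uses the naive mean-field update (top-down input from parents plus a feedback term from children), omitting the extra variational parameters of the full derivation. All names (`mean_field_dynamics`, `tau`, the clamping scheme) are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mean_field_dynamics(W, b, clamped, n_steps=500, tau=0.1):
    """Damped mean-field iteration for a sigmoid belief network.

    W[i, j] is the weight from unit j into unit i; for a feedforward
    (layered) network W is strictly lower triangular. `clamped` maps
    indices of observed units to their fixed values in [0, 1].
    """
    n = len(b)
    mu = np.full(n, 0.5)                     # initial mean activities
    for i, v in clamped.items():
        mu[i] = v
    for _ in range(n_steps):
        a = W @ mu + b                       # top-down input from parents
        feedback = W.T @ (mu - sigmoid(a))   # bottom-up term from children
        target = sigmoid(a + feedback)       # naive mean-field fixed point map
        mu = (1.0 - tau) * mu + tau * target # damped update toward the target
        for i, v in clamped.items():         # keep observed units clamped
            mu[i] = v
    return mu
```

With modest weights and damping the iteration settles to a fixed point of the mean field equations; clamping the bottom-layer units plays the role of presenting evidence, and the converged hidden-unit means supply the statistics needed for unsupervised learning.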
Cite
Text
Saul and Jordan. "Attractor Dynamics in Feedforward Neural Networks." Neural Computation, 2000. doi:10.1162/089976600300015385
Markdown
[Saul and Jordan. "Attractor Dynamics in Feedforward Neural Networks." Neural Computation, 2000.](https://mlanthology.org/neco/2000/saul2000neco-attractor/) doi:10.1162/089976600300015385
BibTeX
@article{saul2000neco-attractor,
title = {{Attractor Dynamics in Feedforward Neural Networks}},
author = {Saul, Lawrence K. and Jordan, Michael I.},
journal = {Neural Computation},
year = {2000},
pages = {1313--1335},
doi = {10.1162/089976600300015385},
volume = {12},
url = {https://mlanthology.org/neco/2000/saul2000neco-attractor/}
}