Learning by Asymmetric Parallel Boltzmann Machines

Abstract

We consider the Little, Shaw, Vasudevan model as a parallel asymmetric Boltzmann machine, in the sense that we extend to this model the entropic learning rule first studied by Ackley, Hinton, and Sejnowski in the case of a sequentially activated network with a symmetric synaptic matrix. The resulting Hebbian learning rule for the parallel asymmetric model draws the signal for updating the synaptic weights from time averages of the discrepancy between expected and actual transitions along the past history of the network. As we work without the hypothesis of symmetric weights, we can also include feedforward networks in our analysis; for these, the entropic learning rule turns out to be complementary to the error backpropagation rule, in that it “rewards the correct behavior” instead of “penalizing the wrong answers.”
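The learning rule described above can be illustrated with a minimal sketch. The code below is not the authors' exact algorithm; it is a hypothetical NumPy illustration of the general idea: a synchronously updated (Little-model) network with an asymmetric weight matrix, whose weights are adjusted by the time-averaged discrepancy between transition statistics gathered with some units clamped to a target ("expected" transitions) and statistics gathered free-running ("actual" transitions). All names, sizes, and the learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 6          # number of +/-1 units (illustrative)
eta = 0.05     # learning rate (illustrative)
# Asymmetric synaptic matrix: W[i, j] need not equal W[j, i].
W = rng.normal(scale=0.1, size=(n, n))

def parallel_step(s, W):
    """One synchronous (Little-model) update: all units are redrawn in parallel."""
    p = 1.0 / (1.0 + np.exp(-2.0 * (W @ s)))       # P(s_i = +1) given the current state
    return np.where(rng.random(len(s)) < p, 1.0, -1.0)

def transition_stats(s0, W, steps, clamp=None):
    """Time average of pre/post products <s_i(t+1) s_j(t)> along a trajectory."""
    stats = np.zeros_like(W)
    s = s0.copy()
    for _ in range(steps):
        s_next = parallel_step(s, W)
        if clamp is not None:
            idx, vals = clamp
            s_next[idx] = vals                      # hold clamped units at the target
        stats += np.outer(s_next, s)
        s = s_next
    return stats / steps

s0 = np.where(rng.random(n) < 0.5, 1.0, -1.0)
target = np.array([1.0, -1.0, 1.0])                 # hypothetical target pattern

# "Expected" transitions (target clamped) minus "actual" transitions (free run):
clamped = transition_stats(s0, W, 500, clamp=(np.arange(3), target))
free = transition_stats(s0, W, 500)
W += eta * (clamped - free)                         # Hebbian update from the discrepancy
```

Because the update is driven by the difference of two time averages over the network's own history, it reinforces transitions the clamped phase exhibits, in the "reward correct behavior" spirit the abstract contrasts with backpropagation's error penalties.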

Cite

Text

Apolloni and de Falco. "Learning by Asymmetric Parallel Boltzmann Machines." Neural Computation, 1991. doi:10.1162/NECO.1991.3.3.402

Markdown

[Apolloni and de Falco. "Learning by Asymmetric Parallel Boltzmann Machines." Neural Computation, 1991.](https://mlanthology.org/neco/1991/apolloni1991neco-learning/) doi:10.1162/NECO.1991.3.3.402

BibTeX

@article{apolloni1991neco-learning,
  title     = {{Learning by Asymmetric Parallel Boltzmann Machines}},
  author    = {Apolloni, Bruno and de Falco, Diego},
  journal   = {Neural Computation},
  year      = {1991},
  pages     = {402-408},
  doi       = {10.1162/NECO.1991.3.3.402},
  volume    = {3},
  number    = {3},
  url       = {https://mlanthology.org/neco/1991/apolloni1991neco-learning/}
}