An Efficient Implementation of the Back-Propagation Algorithm on the Connection Machine CM-2

Abstract

In this paper, we present a novel implementation of the widely used Back-propagation neural net learning algorithm on the Connection Machine CM-2, a general-purpose, massively parallel computer with a hypercube topology. This implementation runs at about 180 million interconnections per second (IPS) on a 64K-processor CM-2. The main interprocessor communication operation used is 2D nearest-neighbor communication. The techniques developed here can easily be extended to implement other algorithms for layered neural nets on the CM-2, or on other massively parallel computers that have 2D or higher-degree connections among their processors.
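For readers unfamiliar with the algorithm the paper parallelizes, the following is a minimal serial sketch of back-propagation for a fully connected layered net. It is not the CM-2 implementation described above (no 2D nearest-neighbor communication or processor grid is modeled); the layer sizes, sigmoid units, squared-error loss, and learning rate are illustrative assumptions.

```python
# Minimal serial sketch of back-propagation for a layered, fully connected net.
# NOTE: this is only the underlying algorithm, not the parallel CM-2 version
# from the paper; all sizes and hyperparameters below are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Assumed layer sizes: input -> hidden -> output.
sizes = [8, 16, 4]
weights = [rng.normal(scale=0.1, size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]
lr = 0.5  # learning rate (assumption)

def forward(x):
    """Forward pass; returns the activations of every layer."""
    activations = [x]
    for W, b in zip(weights, biases):
        activations.append(sigmoid(activations[-1] @ W + b))
    return activations

def backprop(x, target):
    """One back-propagation update on a single (input, target) pair."""
    acts = forward(x)
    # Output-layer error term for squared error with sigmoid units.
    delta = (acts[-1] - target) * acts[-1] * (1.0 - acts[-1])
    for layer in reversed(range(len(weights))):
        # Gradients of the weights and biases feeding this layer.
        grad_W = np.outer(acts[layer], delta)
        grad_b = delta
        # Propagate the error term backward before updating the weights.
        delta = (delta @ weights[layer].T) * acts[layer] * (1.0 - acts[layer])
        weights[layer] -= lr * grad_W
        biases[layer] -= lr * grad_b

# Example usage: one training step on random data.
x = rng.random(sizes[0])
t = rng.random(sizes[-1])
backprop(x, t)
```

In the paper's setting, the weight updates and error propagation above are distributed across the CM-2's processors, with the dominant communication pattern being 2D nearest-neighbor exchange rather than the dense in-memory matrix products used in this sketch.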

Cite

Text

Zhang et al. "An Efficient Implementation of the Back-Propagation Algorithm on the Connection Machine CM-2." Neural Information Processing Systems, 1989.

Markdown

[Zhang et al. "An Efficient Implementation of the Back-Propagation Algorithm on the Connection Machine CM-2." Neural Information Processing Systems, 1989.](https://mlanthology.org/neurips/1989/zhang1989neurips-efficient/)

BibTeX

@inproceedings{zhang1989neurips-efficient,
  title     = {{An Efficient Implementation of the Back-Propagation Algorithm on the Connection Machine CM-2}},
  author    = {Zhang, Xiru and McKenna, Michael and Mesirov, Jill P. and Waltz, David L.},
  booktitle = {Neural Information Processing Systems},
  year      = {1989},
  pages     = {801--809},
  url       = {https://mlanthology.org/neurips/1989/zhang1989neurips-efficient/}
}