The Faulty Behavior of Feedforward Neural Networks with Hard-Limiting Activation Function

Abstract

With the progress in hardware implementation of artificial neural networks, the ability to analyze their faulty behavior has become increasingly important for their diagnosis, repair, reconfiguration, and reliable application. This article studies the behavior of feedforward neural networks with a hard-limiting activation function under stuck-at faults. It is shown that stuck-at-M faults have a larger effect on the network's performance than mixed stuck-at faults, which in turn have a larger effect than stuck-at-0 faults. Furthermore, for the same percentage of faulty interconnections, the fault-tolerant ability of the network decreases as its size increases. The results of the analysis are validated by Monte Carlo simulations.
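The fault model described in the abstract can be illustrated with a small experiment. The sketch below is not the authors' code; it is a minimal, hypothetical reconstruction of the setup as described: a feedforward network of hard-limiting (±1 threshold) units whose interconnection weights are randomly forced to a stuck value of 0 (stuck-at-0), ±M (stuck-at-M), or a mix of both, with the output error rate estimated by Monte Carlo simulation over random input patterns. The network sizes, fault fractions, and value of M are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def hard_limit(x):
    # Hard-limiting (threshold) activation: +1 if x >= 0, else -1.
    return np.where(x >= 0, 1.0, -1.0)

def forward(x, weights):
    # Feedforward pass: each layer applies its weight matrix,
    # then the hard-limiting activation.
    a = x
    for W in weights:
        a = hard_limit(a @ W)
    return a

def inject_stuck_at(weights, frac, mode, M=1.0):
    # Return a copy of the weights with roughly `frac` of the
    # interconnections forced to a stuck value: 0 (stuck-at-0),
    # +/-M (stuck-at-M), or a random mix of the two (mixed faults).
    faulty = [W.copy() for W in weights]
    for W in faulty:
        mask = rng.random(W.shape) < frac
        if mode == "stuck-at-0":
            vals = np.zeros(W.shape)
        elif mode == "stuck-at-M":
            vals = rng.choice([-M, M], size=W.shape)
        else:  # mixed stuck-at faults
            vals = rng.choice([0.0, -M, M], size=W.shape)
        W[mask] = vals[mask]
    return faulty

def monte_carlo_error(weights, frac, mode, trials=200):
    # Estimate the fraction of random +/-1 input patterns whose
    # outputs change under randomly injected stuck-at faults.
    n_in = weights[0].shape[0]
    errors = 0
    for _ in range(trials):
        x = rng.choice([-1.0, 1.0], size=n_in)
        faulty = inject_stuck_at(weights, frac, mode)
        if not np.array_equal(forward(x, weights), forward(x, faulty)):
            errors += 1
    return errors / trials

# Illustrative 8-16-4 network with random weights (an assumption,
# not a configuration from the paper).
weights = [rng.normal(size=(8, 16)), rng.normal(size=(16, 4))]
for mode in ("stuck-at-0", "mixed", "stuck-at-M"):
    print(mode, monte_carlo_error(weights, 0.1, mode))
```

With enough trials, such a simulation can be used to compare the three fault types at a fixed fault percentage, mirroring the paper's Monte Carlo validation of the analytic results.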

Cite

Text

Tian et al. "The Faulty Behavior of Feedforward Neural Networks with Hard-Limiting Activation Function." Neural Computation, 1997. doi:10.1162/NECO.1997.9.5.1109

Markdown

[Tian et al. "The Faulty Behavior of Feedforward Neural Networks with Hard-Limiting Activation Function." Neural Computation, 1997.](https://mlanthology.org/neco/1997/tian1997neco-faulty/) doi:10.1162/NECO.1997.9.5.1109

BibTeX

@article{tian1997neco-faulty,
  title     = {{The Faulty Behavior of Feedforward Neural Networks with Hard-Limiting Activation Function}},
  author    = {Tian, Zhiyu and Lin, Ting-Ting Y. and Yang, Shiyuan and Tong, Shibai},
  journal   = {Neural Computation},
  year      = {1997},
  pages     = {1109--1126},
  doi       = {10.1162/NECO.1997.9.5.1109},
  volume    = {9},
  url       = {https://mlanthology.org/neco/1997/tian1997neco-faulty/}
}