Analysis of the Internal Representations in Neural Networks for Machine Intelligence

Abstract

We examined the internal representations of the training patterns in multi-layer perceptrons and demonstrated that the connection weights between layers effectively transform the representation of the information from one layer to the next in a meaningful way. The internal code, which can be analog or binary in form, is found to depend on a number of factors, including the choice of an appropriate representation of the training patterns, the similarities between the patterns, and the network structure, i.e. the number of hidden layers and the number of hidden units in each layer.
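The "internal code" the abstract refers to can be made concrete with a minimal sketch (not code from the paper): train a small multi-layer perceptron on the XOR patterns with plain backpropagation and inspect the hidden-layer activations each training pattern produces. The 2-3-1 architecture, learning rate, and epoch count here are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Four XOR training patterns and their targets.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# 2 inputs -> 3 hidden units -> 1 output (sizes chosen for the demo).
W1 = rng.normal(0.0, 1.0, (2, 3)); b1 = np.zeros(3)
W2 = rng.normal(0.0, 1.0, (3, 1)); b2 = np.zeros(1)
lr = 1.0

def forward(X):
    h = sigmoid(X @ W1 + b1)            # hidden representation of each pattern
    return h, sigmoid(h @ W2 + b2)

h, out = forward(X)
mse_start = float(np.mean((out - y) ** 2))

for _ in range(10000):
    h, out = forward(X)
    d_out = (out - y) * out * (1.0 - out)    # output-layer delta (squared error)
    d_h = (d_out @ W2.T) * h * (1.0 - h)     # hidden-layer delta
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

h, out = forward(X)
mse_end = float(np.mean((out - y) ** 2))

# Each row of h is the internal code for one training pattern; with sigmoid
# units the activations often saturate toward a near-binary code.
print("hidden code per pattern:\n", np.round(h, 2))
```

Inspecting `h` before and after training shows how the layer-to-layer weights reshape the input representation, the phenomenon the paper analyses.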

Cite

Text

Chan. "Analysis of the Internal Representations in Neural Networks for Machine Intelligence." AAAI Conference on Artificial Intelligence, 1991.

Markdown

[Chan. "Analysis of the Internal Representations in Neural Networks for Machine Intelligence." AAAI Conference on Artificial Intelligence, 1991.](https://mlanthology.org/aaai/1991/chan1991aaai-analysis/)

BibTeX

@inproceedings{chan1991aaai-analysis,
  title     = {{Analysis of the Internal Representations in Neural Networks for Machine Intelligence}},
  author    = {Chan, Lai-Wan},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {1991},
  pages     = {578--583},
  url       = {https://mlanthology.org/aaai/1991/chan1991aaai-analysis/}
}