On the Distribution and Convergence of Feature Space in Self-Organizing Maps

Abstract

This paper presents an analysis of the statistical and convergence properties of Kohonen's self-organizing map of any dimension. Each feature in the map is treated as a sum of random variables. We extend the Central Limit Theorem to a particular case, which is then applied to prove that during learning the feature space tends to multiple Gaussian-distributed stochastic processes. These processes eventually converge, in the mean-square sense, to the probabilistic centers of input subsets, forming a quantization mapping with minimum mean squared distortion, either globally or locally. We also show that the influence of the initial states on the feature map diminishes as training progresses.
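For readers unfamiliar with the map being analyzed, the following is a minimal sketch of Kohonen's self-organizing map learning rule on a 1-D lattice. All function names and parameter schedules here are our own illustrative choices, not taken from the paper; it merely shows the stochastic update whose convergence the paper studies, with each weight vector pulled toward inputs under a decaying learning rate and neighborhood.

```python
import numpy as np

def train_som(data, n_nodes=10, n_iters=2000, seed=0):
    """Illustrative 1-D Kohonen SOM; parameter schedules are assumptions."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.random((n_nodes, dim))   # random initial states
    positions = np.arange(n_nodes)         # node indices on the 1-D lattice
    for t in range(n_iters):
        x = data[rng.integers(len(data))]  # random input sample
        # winner (best-matching unit): node closest to the input
        c = np.argmin(np.linalg.norm(weights - x, axis=1))
        # decaying learning rate and neighborhood width
        alpha = 0.5 * (1.0 - t / n_iters)
        sigma = max(1e-3, (n_nodes / 2.0) * (1.0 - t / n_iters))
        # Gaussian neighborhood around the winner
        h = np.exp(-((positions - c) ** 2) / (2.0 * sigma ** 2))
        # each feature accumulates small random increments toward x
        weights += alpha * h[:, None] * (x - weights)
    return weights

# Example: two tight input clusters at (0,0) and (1,1); after training,
# the map's features settle near the centers of the input subsets.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.05, (200, 2)),
                  rng.normal(1.0, 0.05, (200, 2))])
w = train_som(data)
```

With repeated small updates, each weight becomes a sum of many random increments, which is the view the paper formalizes via its extension of the Central Limit Theorem.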

Cite

Text

Yin and Allinson. "On the Distribution and Convergence of Feature Space in Self-Organizing Maps." Neural Computation, 1995. doi:10.1162/NECO.1995.7.6.1178

Markdown

[Yin and Allinson. "On the Distribution and Convergence of Feature Space in Self-Organizing Maps." Neural Computation, 1995.](https://mlanthology.org/neco/1995/yin1995neco-distribution/) doi:10.1162/NECO.1995.7.6.1178

BibTeX

@article{yin1995neco-distribution,
  title     = {{On the Distribution and Convergence of Feature Space in Self-Organizing Maps}},
  author    = {Yin, Hujun and Allinson, Nigel M.},
  journal   = {Neural Computation},
  year      = {1995},
  pages     = {1178--1187},
  doi       = {10.1162/NECO.1995.7.6.1178},
  volume    = {7},
  url       = {https://mlanthology.org/neco/1995/yin1995neco-distribution/}
}