A Note on Learning Vector Quantization

Abstract

Vector Quantization is useful for data compression. Competitive Learning, which minimizes reconstruction error, is an appropriate algorithm for vector quantization of unlabelled data. Vector quantization of labelled data for classification has a different objective, to minimize the number of misclassifications, and a different algorithm is appropriate. We show that a variant of Kohonen's LVQ2.1 algorithm can be seen as a multi-class extension of an algorithm which in a restricted two-class case can be proven to converge to the Bayes optimal classification boundary. We compare the performance of the LVQ2.1 algorithm to that of a modified version having a decreasing window and normalized step size, on a ten-class vowel classification problem.
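The LVQ2.1 update discussed in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the two nearest prototypes to an input are updated only when one carries the correct label, the other a wrong label, and the input falls inside a window around their decision boundary. The function name, learning rate `lr`, and window width `w` are hypothetical choices for illustration.

```python
import numpy as np

def lvq21_step(x, y, protos, labels, lr=0.05, w=0.3):
    """One LVQ2.1-style update (illustrative sketch, not the paper's code).

    x: input vector, y: its class label.
    protos: (k, d) array of prototype vectors; labels: (k,) their classes.
    Mutates and returns protos.
    """
    d = np.linalg.norm(protos - x, axis=1)
    i, j = np.argsort(d)[:2]                      # two nearest prototypes
    di, dj = d[i], d[j]
    eps = 1e-12                                   # guard against zero distance
    # Kohonen's window test: min(di/dj, dj/di) > (1 - w) / (1 + w)
    in_window = min(di / (dj + eps), dj / (di + eps)) > (1 - w) / (1 + w)
    # update only when exactly one of the two prototypes has the correct class
    if in_window and (labels[i] == y) != (labels[j] == y):
        c, e = (i, j) if labels[i] == y else (j, i)
        protos[c] += lr * (x - protos[c])         # pull correct-class prototype in
        protos[e] -= lr * (x - protos[e])         # push wrong-class prototype away
    return protos
```

The modified version the paper compares against would, under this sketch, shrink `w` and normalize the step size over training, so that late updates concentrate ever closer to the class boundary.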

Cite

Text

de Sa and Ballard. "A Note on Learning Vector Quantization." Neural Information Processing Systems, 1992.

Markdown

[de Sa and Ballard. "A Note on Learning Vector Quantization." Neural Information Processing Systems, 1992.](https://mlanthology.org/neurips/1992/desa1992neurips-note/)

BibTeX

@inproceedings{desa1992neurips-note,
  title     = {{A Note on Learning Vector Quantization}},
  author    = {de Sa, Virginia R. and Ballard, Dana H.},
  booktitle = {Neural Information Processing Systems},
  year      = {1992},
  pages     = {220--227},
  url       = {https://mlanthology.org/neurips/1992/desa1992neurips-note/}
}