On an Unsupervised Learning Rule for Scalar Quantization Following the Maximum Entropy Principle
Abstract
A novel unsupervised learning rule, called Boundary Adaptation Rule (BAR), is introduced for scalar quantization. It is shown that the rule maximizes information-theoretic entropy and thus yields equiprobable quantizations of univariate probability density functions. It is shown by simulations that BAR outperforms other unsupervised competitive learning rules in generating equiprobable quantizations. It is also shown that our rule can do better or worse than the Lloyd I algorithm in minimizing average mean square error, depending on the input distribution. Finally, an application to adaptive non-uniform analog-to-digital (A/D) conversion is considered.
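The paper's BAR update rule itself is not given in the abstract, but the target it optimizes is standard: a scalar quantizer maximizes output entropy exactly when every quantization cell is equally likely, i.e. when the cell boundaries sit at quantiles of the input density. The following sketch (not the authors' learning rule; a batch quantile construction assumed for illustration) shows that such an equiprobable quantizer spreads a skewed input distribution evenly across all output codes:

```python
import random
random.seed(0)

def equiprobable_boundaries(samples, n_levels):
    """Boundaries splitting the empirical distribution into n_levels
    equally likely cells: the empirical i/n_levels quantiles."""
    s = sorted(samples)
    n = len(s)
    return [s[(i * n) // n_levels] for i in range(1, n_levels)]

def quantize(x, boundaries):
    """Index of the quantization cell containing x."""
    idx = 0
    for b in boundaries:
        if x >= b:
            idx += 1
    return idx

# Skewed (exponential) input density: an entropy-maximizing quantizer
# should still place ~1/n_levels of the probability mass in each cell.
n_levels = 8
samples = [random.expovariate(1.0) for _ in range(10000)]
bounds = equiprobable_boundaries(samples, n_levels)
counts = [0] * n_levels
for x in samples:
    counts[quantize(x, bounds)] += 1
print(counts)  # each count close to 10000 / 8 = 1250
```

Note how a uniform quantizer over the same range would instead crowd most samples into the lowest few cells; maximizing entropy is what forces the non-uniform, quantile-placed boundaries that the paper's adaptive A/D application exploits.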
Cite
Text
Van Hulle and Martinez. "On an Unsupervised Learning Rule for Scalar Quantization Following the Maximum Entropy Principle." Neural Computation, 1993. doi:10.1162/NECO.1993.5.6.939

Markdown

[Van Hulle and Martinez. "On an Unsupervised Learning Rule for Scalar Quantization Following the Maximum Entropy Principle." Neural Computation, 1993.](https://mlanthology.org/neco/1993/hulle1993neco-unsupervised/) doi:10.1162/NECO.1993.5.6.939

BibTeX
@article{hulle1993neco-unsupervised,
title = {{On an Unsupervised Learning Rule for Scalar Quantization Following the Maximum Entropy Principle}},
author = {Van Hulle, Marc M. and Martinez, Dominique},
journal = {Neural Computation},
year = {1993},
pages = {939--953},
doi = {10.1162/NECO.1993.5.6.939},
volume = {5},
number = {6},
url = {https://mlanthology.org/neco/1993/hulle1993neco-unsupervised/}
}