Maximum Likelihood Competitive Learning

Abstract

One popular class of unsupervised algorithms are competitive algorithms. In the traditional view of competition, only one competitor, the winner, adapts for any given case. I propose to view competitive adaptation as attempting to fit a blend of simple probability generators (such as gaussians) to a set of data-points. The maximum likelihood fit of a model of this type suggests a "softer" form of competition, in which all competitors adapt in proportion to the relative probability that the input came from each competitor. I investigate one application of the soft competitive model, placement of radial basis function centers for function interpolation, and show that the soft model can give better performance with little additional computational cost.
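The soft competition described above can be sketched with a minimal toy example: each competitor is a unit-variance Gaussian, and every competitor's mean is updated in proportion to its relative probability (its responsibility) for each data point. This is an illustrative reconstruction, not the paper's implementation; the data, learning rate, and initial means are all made up for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D data drawn from two well-separated clusters (illustrative values).
data = np.concatenate([rng.normal(-2.0, 0.5, 50), rng.normal(2.0, 0.5, 50)])

# Two unit-variance Gaussian "competitors" with learnable means.
means = np.array([-1.0, 1.0])
lr = 0.1

for _ in range(20):
    for x in data:
        # Likelihood of x under each competitor (unit variance, constants cancel).
        p = np.exp(-0.5 * (x - means) ** 2)
        # Soft competition: responsibilities are relative probabilities.
        r = p / p.sum()
        # All competitors adapt in proportion to their responsibility.
        # (Hard, winner-take-all competition would replace r with a
        # one-hot vector selecting only the most probable competitor.)
        means += lr * r * (x - means)

print(np.sort(means))  # each mean settles near one cluster center
```

With hard competition, a competitor that never wins never moves; the soft update avoids this, since every competitor receives a nonzero share of each data point.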

Cite

Text

Nowlan. "Maximum Likelihood Competitive Learning." Neural Information Processing Systems, 1989.

Markdown

[Nowlan. "Maximum Likelihood Competitive Learning." Neural Information Processing Systems, 1989.](https://mlanthology.org/neurips/1989/nowlan1989neurips-maximum/)

BibTeX

@inproceedings{nowlan1989neurips-maximum,
  title     = {{Maximum Likelihood Competitive Learning}},
  author    = {Nowlan, Steven J.},
  booktitle = {Neural Information Processing Systems},
  year      = {1989},
  pages     = {574--582},
  url       = {https://mlanthology.org/neurips/1989/nowlan1989neurips-maximum/}
}