Adaptive Mixtures of Probabilistic Transducers

Abstract

We describe and analyze a mixture model for supervised learning of probabilistic transducers. We devise an online learning algorithm that efficiently infers the structure and estimates the parameters of each probabilistic transducer in the mixture. Theoretical analysis and comparative simulations indicate that the learning algorithm tracks the best transducer from an arbitrarily large (possibly infinite) pool of models. We also present an application of the model for inducing a noun phrase recognizer.
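The abstract's claim that the algorithm "tracks the best transducer" from a pool of models is characteristic of Bayesian mixture (performance-weighted averaging) schemes. The following is a minimal sketch of such an online mixture update, not the paper's actual algorithm: the experts here are hypothetical stand-ins for the paper's probabilistic transducers, and each round every expert reports a probability for the symbol actually observed.

```python
# Hedged sketch of an online Bayesian mixture over a fixed pool of
# probabilistic predictors (stand-ins for the paper's transducers).
# Each round the mixture predicts with the weight-averaged probability,
# then multiplies each weight by that expert's likelihood, so weight
# mass concentrates on the best-performing expert over time.

def mixture_step(weights, expert_probs):
    """One online round. expert_probs[i] is expert i's probability
    for the symbol that was actually observed this round."""
    # Mixture prediction: weighted average of the experts' probabilities.
    mix_prob = sum(w * p for w, p in zip(weights, expert_probs))
    # Bayesian update: reweight each expert by its likelihood, renormalize.
    new_weights = [w * p for w, p in zip(weights, expert_probs)]
    total = sum(new_weights)
    new_weights = [w / total for w in new_weights]
    return mix_prob, new_weights

# Example: three experts with a uniform prior; the first expert is
# consistently the most accurate, so its weight grows each round.
weights = [1 / 3, 1 / 3, 1 / 3]
for probs in [(0.9, 0.5, 0.1), (0.8, 0.5, 0.2), (0.9, 0.4, 0.3)]:
    mix_prob, weights = mixture_step(weights, probs)
assert weights[0] > weights[1] > weights[2]
```

With K experts, a standard argument shows the mixture's cumulative log-loss exceeds the best single expert's by at most log K, which is the flavor of guarantee the abstract refers to.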

Cite

Text

Singer. "Adaptive Mixtures of Probabilistic Transducers." Neural Computation, 1997. doi:10.1162/NECO.1997.9.8.1711

Markdown

[Singer. "Adaptive Mixtures of Probabilistic Transducers." Neural Computation, 1997.](https://mlanthology.org/neco/1997/singer1997neco-adaptive/) doi:10.1162/NECO.1997.9.8.1711

BibTeX

@article{singer1997neco-adaptive,
  title     = {{Adaptive Mixtures of Probabilistic Transducers}},
  author    = {Singer, Yoram},
  journal   = {Neural Computation},
  year      = {1997},
  pages     = {1711--1733},
  doi       = {10.1162/NECO.1997.9.8.1711},
  volume    = {9},
  url       = {https://mlanthology.org/neco/1997/singer1997neco-adaptive/}
}