Learning Probabilistic Automata with Variable Memory Length
Abstract
We propose and analyze a distribution learning algorithm for variable memory length Markov processes. These processes can be described by a subclass of probabilistic finite automata which we name Probabilistic Finite Suffix Automata. The learning algorithm is motivated by real applications in man-machine interaction such as handwriting and speech recognition. Conventionally used fixed memory Markov and hidden Markov models have either severe practical or theoretical drawbacks. Though general hardness results are known for learning distributions generated by sources with similar structure, we prove that our algorithm can indeed efficiently learn distributions generated by our more restricted sources. In particular, we show that the KL-divergence between the distribution generated by the target source and the distribution generated by our hypothesis can be made small with high confidence in polynomial time and sample complexity. We demonstrate the applicability of our algorithm by learning the structure of natural English text and using our hypothesis for the correction of corrupted text.
Cite
Text
Ron et al. "Learning Probabilistic Automata with Variable Memory Length." Annual Conference on Computational Learning Theory, 1994. doi:10.1145/180139.181006

Markdown
[Ron et al. "Learning Probabilistic Automata with Variable Memory Length." Annual Conference on Computational Learning Theory, 1994.](https://mlanthology.org/colt/1994/ron1994colt-learning/) doi:10.1145/180139.181006

BibTeX
@inproceedings{ron1994colt-learning,
title = {{Learning Probabilistic Automata with Variable Memory Length}},
author = {Ron, Dana and Singer, Yoram and Tishby, Naftali},
booktitle = {Annual Conference on Computational Learning Theory},
year = {1994},
  pages = {35--46},
doi = {10.1145/180139.181006},
url = {https://mlanthology.org/colt/1994/ron1994colt-learning/}
}