Learning One-Variable Pattern Languages in Linear Average Time
Abstract
A new algorithm for learning one-variable pattern languages is proposed and analyzed with respect to its average-case behavior. We consider the total learning time, which takes into account all operations performed until the algorithm has converged to a correct hypothesis. For the expectation it is shown that, for almost all meaningful distributions defining how the pattern variable is replaced by a string to generate random examples of the target pattern language, this algorithm converges within a constant number of rounds, with a total learning time that is linear in the pattern length. Thus, the algorithm is average-case optimal in a strong sense. Although one-variable pattern languages cannot be inferred finitely, our approach can also be regarded as probabilistic finite learning with high confidence.
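Since the abstract presupposes the notion of a one-variable pattern language, a minimal sketch may help. The Python below is illustrative only, not the paper's learning algorithm, and all names in it are ours: it shows how a pattern containing a single variable x generates words by substitution, and how membership can be decided naively.

```python
"""Illustration of one-variable pattern languages (in the sense of Angluin).

A one-variable pattern is a string over a finite alphabet plus a single
variable symbol x; its language consists of all strings obtained by
substituting one fixed nonempty string for every occurrence of x.
This sketch is NOT the learning algorithm analyzed in the paper.
"""

def generate(pattern: str, substitution: str) -> str:
    """Produce a word of the pattern language: replace every x by the
    same nonempty string (random examples arise by drawing the
    substitution according to some distribution)."""
    assert substitution, "substitutions must be nonempty"
    return pattern.replace("x", substitution)

def in_language(pattern: str, word: str) -> bool:
    """Naive membership test. If the pattern has k occurrences of x and
    c constant symbols, a member word has length c + k*m for some
    substitution length m >= 1; the first occurrence of x then pins
    down the substitution uniquely, and one comparison verifies it."""
    k = pattern.count("x")
    c = len(pattern) - k
    if k == 0:
        return word == pattern
    if len(word) <= c or (len(word) - c) % k != 0:
        return False
    m = (len(word) - c) // k   # length of the substituted string
    i = pattern.index("x")     # constants before the first x match word[:i]
    candidate = word[i:i + m]  # the substring that x must stand for
    return generate(pattern, candidate) == word

if __name__ == "__main__":
    p = "axbxa"
    w = generate(p, "ba")           # -> "ababbaa"
    print(w, in_language(p, w))     # ababbaa True
    print(in_language(p, "aabaa"))  # x -> "a", so True
```

The learning problem studied in the paper is the converse task: given a stream of such randomly generated words, infer the pattern itself, with expected total time linear in the pattern length.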
Cite
Text
Reischuk and Zeugmann. "Learning One-Variable Pattern Languages in Linear Average Time." Annual Conference on Computational Learning Theory, 1998. doi:10.1145/279943.279984
Markdown
[Reischuk and Zeugmann. "Learning One-Variable Pattern Languages in Linear Average Time." Annual Conference on Computational Learning Theory, 1998.](https://mlanthology.org/colt/1998/reischuk1998colt-learning/) doi:10.1145/279943.279984
BibTeX
@inproceedings{reischuk1998colt-learning,
title = {{Learning One-Variable Pattern Languages in Linear Average Time}},
author = {Reischuk, Rüdiger and Zeugmann, Thomas},
booktitle = {Annual Conference on Computational Learning Theory},
year = {1998},
pages = {198--208},
doi = {10.1145/279943.279984},
url = {https://mlanthology.org/colt/1998/reischuk1998colt-learning/}
}