Learning Regular Languages from Simple Positive Examples

Abstract

Learning from positive data is an important topic in Grammatical Inference, since it is believed that children acquire grammar from syntactically correct (i.e., positive) instances alone. However, classical learning models provide no way to avoid the problem of overgeneralization. To overcome this problem, we use a model of learning from simple examples, where the notion of simplicity is defined with the help of Kolmogorov complexity. We show that a general and natural heuristic which allows learning from simple positive examples can be developed in this model. Our main result is that the class of regular languages is probably exactly learnable from simple positive examples.
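As a minimal illustration of the overgeneralization problem the abstract refers to (our own sketch, not code from the paper): with positive-only data, every hypothesis whose language contains the sample is consistent with it, so a learner with no bias toward smaller languages may settle on the maximally general hypothesis Σ*. The target language (ab)* and the predicate names below are illustrative assumptions.

```python
def consistent(hypothesis, positive_sample):
    """A hypothesis (a membership predicate) is consistent with a
    positive-only sample iff it accepts every example in it."""
    return all(hypothesis(w) for w in positive_sample)

# Positive examples drawn from the target language (ab)*.
sample = ["", "ab", "abab", "ababab"]

# Membership in (ab)*: an even-length string that repeats "ab".
target = lambda w: len(w) % 2 == 0 and w == "ab" * (len(w) // 2)

# The maximally general hypothesis: accept every string over the alphabet.
sigma_star = lambda w: True

print(consistent(target, sample))      # True: the target fits the sample
print(consistent(sigma_star, sample))  # True: but so does Sigma*, which
                                       # accepts "ba" and everything else
```

Positive examples alone cannot rule Σ* out; this is why the paper needs an extra bias, supplied there by the simplicity (Kolmogorov complexity) of the examples.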

Cite

Text

François Denis. "Learning Regular Languages from Simple Positive Examples." Machine Learning, 44:37-66, 2001. doi:10.1023/A:1010826628977

Markdown

[François Denis. "Learning Regular Languages from Simple Positive Examples." Machine Learning, 44:37-66, 2001.](https://mlanthology.org/mlj/2001/denis2001mlj-learning/) doi:10.1023/A:1010826628977

BibTeX

@article{denis2001mlj-learning,
  title     = {{Learning Regular Languages from Simple Positive Examples}},
  author    = {Denis, François},
  journal   = {Machine Learning},
  year      = {2001},
  pages     = {37--66},
  doi       = {10.1023/A:1010826628977},
  volume    = {44},
  url       = {https://mlanthology.org/mlj/2001/denis2001mlj-learning/}
}