Learning Correction Grammars

Abstract

We investigate a new paradigm in the context of learning in the limit: learning correction grammars for classes of r.e. languages. Knowing a language may feature a representation of the target language in terms of two sets of rules (two grammars). The second grammar is used to make corrections to the first grammar. Such a pair of grammars can be seen as a single description of (or grammar for) the language. We call such grammars correction grammars. Correction grammars capture the observable fact that people do correct their linguistic utterances during their usual linguistic activities. We show that learning correction grammars for classes of r.e. languages in the TxtEx-model (i.e., converging to a single correct correction grammar in the limit) is sometimes more powerful than learning ordinary grammars even in the TxtBc-model (where the learner is allowed to converge to infinitely many syntactically distinct but correct conjectures in the limit). For each n ≥ 0, there is a similar learning advantage, where we compare learning correction grammars that make n + 1 corrections to those that make n corrections. The concept of a correction grammar can be extended into the constructive transfinite, using the idea of counting down from notations for transfinite constructive ordinals. For u a notation in Kleene's general system (O, <_o) of ordinal notations, we introduce the concept of a u-correction grammar, where u is used to bound the number of corrections that the grammar is allowed to make. We prove a general hierarchy result: if u and v are notations for constructive ordinals such that u <_o v, then there are classes of r.e. languages that can be TxtEx-learned by conjecturing v-correction grammars but not by conjecturing u-correction grammars. Surprisingly, we show that, above "ω-many" corrections, it is not possible to strengthen the hierarchy: TxtEx-learning u-correction grammars of classes of r.e. languages, where u is a notation in O for any ordinal, can be simulated by TxtBc-learning w-correction grammars, where w is any notation for the smallest infinite ordinal ω.
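The two central ideas of the abstract can be sketched concretely: a 1-correction grammar names a language as a set difference W_p \ W_q (the second grammar retracts elements the first enumerated), and a correction budget given by a notation for ω must, at the moment of the first correction, commit to a finite count-down. The following Python sketch uses finite toy enumerations in place of genuinely r.e. sets; the names `W_p`, `W_q`, `language`, and `OmegaBudget` are illustrative, not from the paper.

```python
def W_p():
    """First grammar: enumerates even numbers below 20 (toy data)."""
    yield from range(0, 20, 2)

def W_q():
    """Second grammar: retracts the multiples of 4 (toy data)."""
    yield from range(0, 20, 4)

def language(p, q):
    """Language named by the correction grammar (p, q): W_p minus W_q."""
    return set(p()) - set(q())

class OmegaBudget:
    """Correction budget counting down from a notation for omega:
    the first correction must choose a finite bound n, and every
    later correction decrements that bound (illustrative sketch)."""
    def __init__(self):
        self.value = "omega"

    def correct(self, finite_bound=None):
        if self.value == "omega":
            # First correction: replace omega by a finite ordinal.
            assert isinstance(finite_bound, int) and finite_bound >= 0
            self.value = finite_bound
        else:
            assert self.value > 0, "no corrections left"
            self.value -= 1

print(sorted(language(W_p, W_q)))   # evens that are not multiples of 4
b = OmegaBudget()
b.correct(finite_bound=3)           # first correction: omega -> 3
b.correct(); b.correct()            # two further corrections: 3 -> 1
print(b.value)
```

An ω-correction grammar may thus make any finite number of corrections, but the exact number need not be fixed in advance, which is why ω-many corrections already behave very differently from any fixed n.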

Cite

Text

Carlucci et al. "Learning Correction Grammars." Annual Conference on Computational Learning Theory, 2007. doi:10.1007/978-3-540-72927-3_16

Markdown

[Carlucci et al. "Learning Correction Grammars." Annual Conference on Computational Learning Theory, 2007.](https://mlanthology.org/colt/2007/carlucci2007colt-learning/) doi:10.1007/978-3-540-72927-3_16

BibTeX

@inproceedings{carlucci2007colt-learning,
  title     = {{Learning Correction Grammars}},
  author    = {Carlucci, Lorenzo and Case, John and Jain, Sanjay},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {2007},
  pages     = {203--217},
  doi       = {10.1007/978-3-540-72927-3_16},
  url       = {https://mlanthology.org/colt/2007/carlucci2007colt-learning/}
}