Uniform Characterizations of Various Kinds of Language Learning

Abstract

Learnability of families of recursive languages from positive data is studied in the Gold paradigm of inductive inference. A large body of work has focused on understanding how the language learning ability of an inductive inference machine is affected when the machine is constrained. For example, notions of monotonicity derived from work in inductive logic have been studied; these variously reflect the requirement that the learner's guesses must monotonically 'improve' with respect to the target language. A single characterization theorem is obtained that uniformly characterizes all classes learnable under a number of different constraints, each specified via a parametric description. It is also shown how many known characterizations follow as straightforward applications of this theorem. Finally, it is argued that the new parameterization scheme accommodates a wide variety of constraints.

Cite

Text

Kapur. "Uniform Characterizations of Various Kinds of Language Learning." International Conference on Algorithmic Learning Theory, 1993. doi:10.1007/3-540-57370-4_48

Markdown

[Kapur. "Uniform Characterizations of Various Kinds of Language Learning." International Conference on Algorithmic Learning Theory, 1993.](https://mlanthology.org/alt/1993/kapur1993alt-uniform/) doi:10.1007/3-540-57370-4_48

BibTeX

@inproceedings{kapur1993alt-uniform,
  title     = {{Uniform Characterizations of Various Kinds of Language Learning}},
  author    = {Kapur, Shyam},
  booktitle = {International Conference on Algorithmic Learning Theory},
  year      = {1993},
  pages     = {197--208},
  doi       = {10.1007/3-540-57370-4_48},
  url       = {https://mlanthology.org/alt/1993/kapur1993alt-uniform/}
}