Language Learning Under Various Types of Constraint Combinations
Abstract
The learnability of families of recursive languages from positive data is studied in the Gold paradigm of inductive inference. A large body of work has focused on understanding how the learning ability of a learner is affected when it is constrained in various ways. For example, motivated by work in inductive logic, different notions of monotonicity have been studied which variously reflect the requirement that the learner's guesses must monotonically ‘improve’ with respect to the target language. Various types of combinations of constraints such as monotonicity are defined and their relationships explored. Under one version of a disjunctive combination of a set of constraints, learning is considered successful as long as, on any presentation of a language, at least one of the constraints in the set is satisfied. It is also shown that a conjunctive combination of certain monotonicity constraints is less powerful than the set-theoretic intersection of the classes corresponding to the individual constraints.
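To make the notions in the abstract concrete, here is a minimal sketch of Gold-style learning from positive data over a toy finite class of languages. The class `CLASS`, the conservative `learner`, and the constraint-checking helpers are illustrative assumptions, not constructions from the paper; strong monotonicity is modeled as the requirement that successive conjectured languages grow under inclusion, and the disjunctive combination succeeds on a presentation when at least one constraint in the set holds.

```python
# Toy sketch of learning from positive data (Gold paradigm), assuming a
# small finite class of languages represented as frozensets of naturals.
# All names here are illustrative, not taken from the paper.

CLASS = [
    frozenset({0}),
    frozenset({0, 1}),
    frozenset({0, 1, 2}),
]

def learner(data_seen):
    """Conjecture the smallest language in CLASS containing all data seen."""
    candidates = [lang for lang in CLASS if set(data_seen) <= lang]
    return min(candidates, key=len) if candidates else None

def is_strongly_monotone(presentation):
    """Check that successive conjectures grow: L(h_i) is a subset of L(h_{i+1})."""
    prev = None
    for i in range(1, len(presentation) + 1):
        guess = learner(presentation[:i])
        if prev is not None and guess is not None and not (prev <= guess):
            return False
        prev = guess
    return True

def satisfies_disjunction(constraints, presentation):
    """Disjunctive combination: at least one constraint holds on the presentation."""
    return any(constraint(presentation) for constraint in constraints)
```

On the presentation `[0, 1, 2]` this learner conjectures `{0}`, `{0, 1}`, `{0, 1, 2}` in turn, so the strong monotonicity check succeeds, and hence so does any disjunctive combination that includes it.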
Cite
Shyam Kapur. "Language Learning Under Various Types of Constraint Combinations." International Conference on Algorithmic Learning Theory, 1994. doi:10.1007/3-540-58520-6_77
@inproceedings{kapur1994alt-language,
title = {{Language Learning Under Various Types of Constraint Combinations}},
author = {Kapur, Shyam},
booktitle = {International Conference on Algorithmic Learning Theory},
year = {1994},
pages = {365--378},
doi = {10.1007/3-540-58520-6_77},
url = {https://mlanthology.org/alt/1994/kapur1994alt-language/}
}