Monotonic Language Learning
Abstract
Learnability of families of recursive languages from positive data is studied in the Gold paradigm of inductive inference, where the learner obeys certain constraints motivated by work in inductive reasoning. Previously, various notions of monotonicity have been defined in the context of language learning. These constraints require that the learner's guess monotonically ‘improves’ with respect to the target language. In this paper, the ideas from inductive reasoning are instantiated in alternative ways. Links are established between the various new constraints, both among themselves and with other well-known constraints, such as conservativeness. Exactly learnable families are characterized for prudent learners that obey various combinations of these constraints. Applications of these characterizations are also shown.
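As a hedged illustration of the setting the abstract describes (not code from the paper itself), the sketch below shows a conservative learner receiving positive data in the Gold paradigm over a toy hypothesis family L_i = {0, …, i}. The learner revises its guess only when the current hypothesis fails to cover the data seen, so on this family its guessed languages grow monotonically toward the target. All names and the language family are illustrative assumptions.

```python
def language(i):
    """The i-th language in a toy indexed family: {0, ..., i}."""
    return set(range(i + 1))

def conservative_learner(text):
    """Process a text (an enumeration of positive examples) and yield the
    learner's hypothesis index after each datum.

    The learner is conservative: it abandons its current guess only when
    that guess fails to cover the data seen so far.  On this family the
    guesses are monotone -- each hypothesized language is a subset of the
    next, and the sequence converges to the least index covering the text.
    """
    seen = set()
    guess = 0
    for x in text:
        seen.add(x)
        if not seen <= language(guess):
            # Switch to the smallest index whose language covers the data.
            guess = max(seen)
        yield guess

# Presenting the target L_3 = {0, 1, 2, 3} via the text 1, 0, 3, 2:
hypotheses = list(conservative_learner([1, 0, 3, 2]))
# hypotheses == [1, 1, 3, 3]: non-decreasing, converging to index 3
```

The hypothesis sequence never overshoots and never retreats, which is the intuition behind the monotonicity constraints the paper compares.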
Cite
Kapur. "Monotonic Language Learning." International Conference on Algorithmic Learning Theory, 1992. doi:10.1007/3-540-57369-0_35
@inproceedings{kapur1992alt-monotonic,
title = {{Monotonic Language Learning}},
author = {Kapur, Shyam},
booktitle = {International Conference on Algorithmic Learning Theory},
year = {1992},
pages = {147-158},
doi = {10.1007/3-540-57369-0_35},
url = {https://mlanthology.org/alt/1992/kapur1992alt-monotonic/}
}