U-Shaped, Iterative, and Iterative-with-Counter Learning

Abstract

This paper solves an important problem left open in the literature by showing that U-shapes are unnecessary in iterative learning from positive data. A U-shape occurs when a learner first learns, then unlearns, and, finally, relearns, some target concept. Iterative learning is a Gold-style learning model in which each of a learner’s output conjectures depends only upon the learner’s most recent conjecture and most recent input element. Previous results had shown, for example, that U-shapes are unnecessary for explanatory learning, but are necessary for behaviorally correct learning. Work on the aforementioned problem led to the consideration of an iterative-like learning model, in which each of a learner’s conjectures may, in addition, depend upon the number of elements so far presented to the learner. Learners in this new model are strictly more powerful than traditional iterative learners, yet not as powerful as full explanatory learners. Can any class of languages learnable in this new model be learned without U-shapes? For now, this problem is left open.
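The difference between the two models described above can be sketched informally. The following toy Python code (all names illustrative, not from the paper, which works with partial computable functions over coded conjectures) contrasts the information each learner's update function receives at every step: an iterative learner sees only its previous conjecture and the current input element, while an iterative-with-counter learner additionally sees how many elements have been presented so far.

```python
def run_iterative(update, initial, text):
    """Iterative learning: each new conjecture depends only on the
    previous conjecture and the current input element."""
    conjecture = initial
    for element in text:
        conjecture = update(conjecture, element)
    return conjecture


def run_iterative_with_counter(update, initial, text):
    """Iterative-with-counter learning: the update function may also
    depend on the number of elements presented so far."""
    conjecture = initial
    for count, element in enumerate(text, start=1):
        conjecture = update(conjecture, element, count)
    return conjecture


# Toy illustration: "learn" the maximum element seen so far.
if __name__ == "__main__":
    presentation = [3, 1, 4, 1, 5]
    print(run_iterative(lambda c, x: max(c, x), 0, presentation))
    print(run_iterative_with_counter(lambda c, x, n: max(c, x), 0, presentation))
```

The counter costs nothing to maintain, yet (as the paper shows) strictly enlarges the class of learnable languages, while still falling short of full explanatory learning, in which the learner may remember the entire presentation seen so far.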

Cite

Text

Case and Moelius. "U-Shaped, Iterative, and Iterative-with-Counter Learning." Machine Learning, 2008. doi:10.1007/s10994-008-5047-9

Markdown

[Case and Moelius. "U-Shaped, Iterative, and Iterative-with-Counter Learning." Machine Learning, 2008.](https://mlanthology.org/mlj/2008/case2008mlj-ushaped/) doi:10.1007/s10994-008-5047-9

BibTeX

@article{case2008mlj-ushaped,
  title     = {{U-Shaped, Iterative, and Iterative-with-Counter Learning}},
  author    = {Case, John and Moelius, Samuel E.},
  journal   = {Machine Learning},
  year      = {2008},
  pages     = {63--88},
  doi       = {10.1007/s10994-008-5047-9},
  volume    = {72},
  url       = {https://mlanthology.org/mlj/2008/case2008mlj-ushaped/}
}