Variations on U-Shaped Learning
Abstract
The paper deals with the following problem: is returning to wrong conjectures necessary to achieve the full power of learning? Returning to wrong conjectures complements the paradigm of U-shaped learning [2,6,8,20,24], in which a learner returns to old correct conjectures. We explore our problem for classical models of learning in the limit: TxtEx-learning (when a learner stabilizes on a correct conjecture) and TxtBc-learning (when a learner stabilizes on a sequence of correct grammars representing the target concept). In all cases, we show that, surprisingly, returning to wrong conjectures is sometimes necessary to achieve the full power of learning. On the other hand, it is not necessary to return to old "overgeneralizing" conjectures, that is, conjectures containing elements not belonging to the target language. We also consider our problem in the context of so-called vacillatory learning, when a learner stabilizes on a finite number of correct grammars. In this case we show that both returning to old wrong conjectures and returning to old "overgeneralizing" conjectures are necessary for full learning power. We also show that, surprisingly, learners consistent with the input seen so far can be made decisive [2,21]: they do not have to return to any old conjectures, whether wrong or right.
Cite
Text
Carlucci et al. "Variations on U-Shaped Learning." Annual Conference on Computational Learning Theory, 2005. doi:10.1007/11503415_26
Markdown
[Carlucci et al. "Variations on U-Shaped Learning." Annual Conference on Computational Learning Theory, 2005.](https://mlanthology.org/colt/2005/carlucci2005colt-variations/) doi:10.1007/11503415_26
BibTeX
@inproceedings{carlucci2005colt-variations,
title = {{Variations on U-Shaped Learning}},
author = {Carlucci, Lorenzo and Jain, Sanjay and Kinber, Efim B. and Stephan, Frank},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2005},
pages = {382-397},
doi = {10.1007/11503415_26},
url = {https://mlanthology.org/colt/2005/carlucci2005colt-variations/}
}