Strongly Non-U-Shaped Learning Results by General Techniques
Abstract
In learning, a semantic or behavioral U-shape occurs when a learner first learns, then unlearns, and, finally, relearns, some target concept (on the way to success). Within the framework of Inductive Inference, previous results have shown, for example, that such U-shapes are unnecessary for explanatory learning, but are necessary for behaviorally correct and non-trivial vacillatory learning. Herein we focus more on syntactic U-shapes. This paper introduces two general techniques and applies them especially to syntactic U-shapes in learning: one technique to show when they are necessary and one to show when they are unnecessary. The technique for the former is very general and applicable to a much wider range of learning criteria. It employs so-called self-learning classes of languages, which are shown to characterize completely one criterion learning more than another. We apply these techniques to show that, for set-driven and partially set-driven learning, any kind of U-shape is unnecessary. Furthermore, we show that U-shapes are, in a strong sense, not unnecessary for iterative learning, contrasting an earlier result by Case and Moelius that semantic U-shapes are unnecessary for iterative learning.
Cite
Text
Case and Kötzing. "Strongly Non-U-Shaped Learning Results by General Techniques." Annual Conference on Computational Learning Theory, 2010.
Markdown
[Case and Kötzing. "Strongly Non-U-Shaped Learning Results by General Techniques." Annual Conference on Computational Learning Theory, 2010.](https://mlanthology.org/colt/2010/case2010colt-strongly/)
BibTeX
@inproceedings{case2010colt-strongly,
title = {{Strongly Non-U-Shaped Learning Results by General Techniques}},
author = {Case, John and Kötzing, Timo},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2010},
pages = {181--193},
url = {https://mlanthology.org/colt/2010/case2010colt-strongly/}
}