Merging Uniform Inductive Learners
Abstract
The fundamental learning model considered here is identification of recursive functions in the limit, as introduced by Gold [8], but the concept is investigated on a meta-level. A set of classes of recursive functions is uniformly learnable under an inference criterion I if there is a single learner that synthesizes a learner for each of these classes from a corresponding description of the class. The particular question discussed here is how unions of uniformly learnable sets of such classes can still be identified uniformly. In particular, unions of classes leading to strong separations of inference criteria in the uniform model are considered. The main result is that for any pair (I, I′) of different inference criteria considered here, there exists a fixed set of descriptions of learning problems from I such that its union with any uniformly I-learnable collection is uniformly I′-learnable, but no longer uniformly I-learnable.
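The two notions the abstract builds on can be illustrated with a toy sketch (not from the paper; the hypothesis lists and function names are hypothetical choices for demonstration): a Gold-style learner receives ever longer prefixes f(0), f(1), … of a target function and must converge to a correct hypothesis, while a uniform learner receives a *description* of a class and synthesizes such a learner for it.

```python
# Illustrative sketch of identification in the limit via learning by
# enumeration, and of uniform (meta-level) learning. Hypothesis classes
# here are toy examples, not constructions from the paper.

def uniform_learner(description):
    """Synthesize an enumeration learner for the described class.

    Here a 'description' is simply an explicit list of candidate
    functions; the synthesized learner conjectures the index of the
    first candidate consistent with the data seen so far.
    """
    def learner(data):
        # data is the finite prefix [f(0), ..., f(n)] observed so far.
        for i, h in enumerate(description):
            if all(h(x) == y for x, y in enumerate(data)):
                return i
        return None  # no consistent hypothesis yet
    return learner

# A tiny hypothesis enumeration for one class of recursive functions.
HYPOTHESES = [
    lambda x: 0,          # h_0: constant 0
    lambda x: x,          # h_1: identity
    lambda x: x * x,      # h_2: square
    lambda x: 2 * x + 1,  # h_3: odd numbers
]

learner = uniform_learner(HYPOTHESES)

# Feed longer and longer prefixes of the target f(x) = x*x: the
# conjecture sequence stabilizes on index 2 and never changes again,
# which is exactly identification in the limit.
target = [x * x for x in range(6)]
conjectures = [learner(target[: n + 1]) for n in range(6)]
# conjectures == [0, 1, 2, 2, 2, 2]
```

The same `uniform_learner` works for any description of this shape, which mirrors the meta-level idea: one fixed procedure produces a learner for each class from its description, rather than requiring a hand-crafted learner per class.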
Cite
Text
Zilles, Sandra. "Merging Uniform Inductive Learners." Annual Conference on Computational Learning Theory, 2002, pp. 201-215. doi:10.1007/3-540-45435-7_14

Markdown

[Zilles, Sandra. "Merging Uniform Inductive Learners." Annual Conference on Computational Learning Theory, 2002.](https://mlanthology.org/colt/2002/zilles2002colt-merging/) doi:10.1007/3-540-45435-7_14

BibTeX
@inproceedings{zilles2002colt-merging,
title = {{Merging Uniform Inductive Learners}},
author = {Zilles, Sandra},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2002},
pages = {201-215},
doi = {10.1007/3-540-45435-7_14},
url = {https://mlanthology.org/colt/2002/zilles2002colt-merging/}
}