Parallelism Increases Iterative Learning Power
Abstract
Iterative learning ($\textbf{It}$-learning) is a Gold-style learning model in which each of a learner’s output conjectures may depend only upon the learner’s current conjecture and the current input element. Two extensions of the $\textbf{It}$-learning model are considered, each of which involves parallelism. The first is to run, in parallel, distinct instantiations of a single learner on each input element. The second is to run, in parallel, n individual learners incorporating the first extension, and to allow the n learners to communicate their results. In most contexts, parallelism is only a means of improving efficiency. However, as shown herein, learners incorporating the first extension are more powerful than $\textbf{It}$-learners, and collective learners resulting from the second extension increase in learning power as n increases. Attention is paid to how one would actually implement a learner incorporating each extension. Parallelism is the underlying mechanism employed.
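The defining restriction of $\textbf{It}$-learning — that each output conjecture may depend only on the current conjecture and the current input element — can be pictured as a fold over the input stream. The sketch below is a hypothetical illustration of that update discipline only (the update rule and types are assumptions, not the paper's construction):

```python
from typing import Callable, Iterable, Optional

# A conjecture is abstract; here we use Optional[int] as a stand-in
# (e.g., an index into some programming system). This is an assumption
# for illustration, not the paper's formalization.
Conjecture = Optional[int]

def run_it_learner(
    update: Callable[[Conjecture, int], Conjecture],
    stream: Iterable[int],
) -> Conjecture:
    """Run an iterative learner: the next conjecture is computed from
    ONLY the current conjecture and the current input element; no other
    memory of previously seen data is retained."""
    conjecture: Conjecture = None
    for element in stream:
        conjecture = update(conjecture, element)
    return conjecture

# Toy update rule (hypothetical): conjecture the largest element seen so
# far, which is computable from (current conjecture, current element) alone.
def max_rule(conj: Conjecture, x: int) -> Conjecture:
    return x if conj is None else max(conj, x)

final = run_it_learner(max_rule, [3, 1, 4, 1, 5])
```

The point of the sketch is the signature of `update`: because it receives no history beyond the current conjecture, any information the learner wants to carry forward must be encoded in the conjecture itself, which is precisely the limitation the paper's parallel extensions relax.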
Cite
Text
Case and Moelius. "Parallelism Increases Iterative Learning Power." International Conference on Algorithmic Learning Theory, 2007. doi:10.1007/978-3-540-75225-7_8
Markdown
[Case and Moelius. "Parallelism Increases Iterative Learning Power." International Conference on Algorithmic Learning Theory, 2007.](https://mlanthology.org/alt/2007/case2007alt-parallelism/) doi:10.1007/978-3-540-75225-7_8
BibTeX
@inproceedings{case2007alt-parallelism,
title = {{Parallelism Increases Iterative Learning Power}},
author = {Case, John and Moelius, Samuel E.},
booktitle = {International Conference on Algorithmic Learning Theory},
year = {2007},
pages = {49-63},
doi = {10.1007/978-3-540-75225-7_8},
url = {https://mlanthology.org/alt/2007/case2007alt-parallelism/}
}