Learning from a Population of Hypotheses
Abstract
We introduce a new formal model in which a learning algorithm must combine a collection of potentially poor but statistically independent hypothesis functions in order to approximate an unknown target function arbitrarily well. Our motivation includes the question of how to make optimal use of multiple independent runs of a mediocre learning algorithm, as well as settings in which the many hypotheses are obtained by a distributed population of identical learning agents.
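The core intuition behind combining many independent mediocre hypotheses can be illustrated with a simple majority-vote simulation. This is only a sketch of the Condorcet-style averaging effect, not the paper's formal model or its combination schemes; the accuracy level (0.6), the per-input independent error assumption, and all function names here are illustrative choices.

```python
import random

random.seed(0)

def make_hypothesis(p, target):
    """A stand-in for a 'mediocre' learner: agrees with `target`
    on each input independently with probability p."""
    def h(x):
        return target(x) if random.random() < p else 1 - target(x)
    return h

def majority_vote(hypotheses, x):
    """Combine binary hypotheses by simple majority."""
    votes = sum(h(x) for h in hypotheses)
    return 1 if 2 * votes > len(hypotheses) else 0

target = lambda x: x % 2  # toy binary target function

def error(classify, n_trials=2000):
    """Empirical error rate of `classify` against the target."""
    wrong = sum(classify(x) != target(x) for x in range(n_trials))
    return wrong / n_trials

single = make_hypothesis(0.6, target)
population = [make_hypothesis(0.6, target) for _ in range(101)]

print("single hypothesis error:", error(single))
print("101-vote majority error:", error(lambda x: majority_vote(population, x)))
```

Because each hypothesis errs independently, the majority's error shrinks with the population size even though every individual hypothesis is only slightly better than chance; part of the paper's contribution is analyzing when and how such populations can be combined to approximate the target arbitrarily well.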
Cite
Text
Kearns and Seung. "Learning from a Population of Hypotheses." Machine Learning, 1995. doi:10.1007/BF00993412
Markdown
[Kearns and Seung. "Learning from a Population of Hypotheses." Machine Learning, 1995.](https://mlanthology.org/mlj/1995/kearns1995mlj-learning/) doi:10.1007/BF00993412
BibTeX
@article{kearns1995mlj-learning,
title = {{Learning from a Population of Hypotheses}},
author = {Kearns, Michael J. and Seung, H. Sebastian},
journal = {Machine Learning},
year = {1995},
pages = {255--276},
doi = {10.1007/BF00993412},
volume = {18},
url = {https://mlanthology.org/mlj/1995/kearns1995mlj-learning/}
}