Limitations on Inductive Learning
Abstract
This paper explores the proposition that inductive learning from examples is fundamentally limited to learning only a small fraction of the total space of possible hypotheses. We begin by defining the notion of an algorithm reliably learning a good approximation to a concept C. An empirical study of three algorithms (the classical algorithm for maximally specific conjunctive generalizations, ID3, and back-propagation for feed-forward networks of logistic units) demonstrates that each of these algorithms performs very poorly for the task of learning concepts defined over the space of Boolean feature vectors containing 3 variables. Simple counting arguments allow us to prove an upper bound on the maximum number of concepts reliably learnable from m training examples.
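The counting argument sketched in the abstract can be made concrete. As a hedged illustration (not code from the paper): over n = 3 Boolean variables there are 2^3 = 8 distinct instances and therefore 2^(2^3) = 256 possible concepts; after m distinct labeled training examples, 2^(8−m) concepts remain consistent with the data, so no learner can reliably single out the target concept from a small sample.

```python
# Illustrative sketch of the counting argument, assuming n = 3 Boolean
# features as in the paper's empirical study. Names and helper functions
# here are hypothetical, for illustration only.
from itertools import product

n = 3
instances = list(product([0, 1], repeat=n))   # all 2**n = 8 feature vectors
num_concepts = 2 ** len(instances)            # 2**8 = 256 Boolean concepts

def consistent_concepts(m):
    """Concepts agreeing with m distinct labeled examples: each of the
    remaining 2**n - m instances may still be labeled either way."""
    return 2 ** (len(instances) - m)

print(len(instances))        # 8 instances
print(num_concepts)          # 256 possible concepts
print(consistent_concepts(4))  # 16 concepts still consistent after 4 examples
```

Even after seeing half of the instance space (m = 4), sixteen distinct concepts remain indistinguishable, which is the flavor of limitation the paper's upper bound formalizes.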
Cite
Dietterich, Thomas G. "Limitations on Inductive Learning." International Conference on Machine Learning, 1989, pp. 124-128. doi:10.1016/b978-1-55860-036-2.50039-4
BibTeX
@inproceedings{dietterich1989icml-limitations,
title = {{Limitations on Inductive Learning}},
author = {Dietterich, Thomas G.},
booktitle = {International Conference on Machine Learning},
year = {1989},
pages = {124-128},
doi = {10.1016/b978-1-55860-036-2.50039-4},
url = {https://mlanthology.org/icml/1989/dietterich1989icml-limitations/}
}