On Learning More Concepts

Abstract

The coverage of a learning algorithm is the number of concepts that can be learned by that algorithm from samples of a given size. This paper asks whether good learning algorithms can be designed by maximizing their coverage. The paper extends a previous upper bound on the coverage of any Boolean concept learning algorithm and describes two algorithms—Multi-Balls and Large-Ball—whose coverage approaches this upper bound. Experimental measurement of the coverage of the ID3 and FRINGE algorithms shows that their coverage is far below this bound. Further analysis of Large-Ball shows that although it learns many concepts, these do not seem to be very interesting concepts. Hence, coverage maximization alone does not appear to yield practically useful learning algorithms. The paper concludes with a definition of coverage within a bias, which suggests a way that coverage maximization could be applied to strengthen weak preference biases.

Cite

Text

Almuallim and Dietterich. "On Learning More Concepts." International Conference on Machine Learning, 1992. doi:10.1016/B978-1-55860-247-2.50007-3

Markdown

[Almuallim and Dietterich. "On Learning More Concepts." International Conference on Machine Learning, 1992.](https://mlanthology.org/icml/1992/almuallim1992icml-learning/) doi:10.1016/B978-1-55860-247-2.50007-3

BibTeX

@inproceedings{almuallim1992icml-learning,
  title     = {{On Learning More Concepts}},
  author    = {Almuallim, Hussein and Dietterich, Thomas G.},
  booktitle = {International Conference on Machine Learning},
  year      = {1992},
  pages     = {11--19},
  doi       = {10.1016/B978-1-55860-247-2.50007-3},
  url       = {https://mlanthology.org/icml/1992/almuallim1992icml-learning/}
}