Trading Off Simplicity and Coverage in Incremental Concept Learning

Abstract

We present HILLARY, an incremental learning method that addresses several of the more difficult aspects of learning from examples. Specifically, HILLARY employs hill climbing to incrementally learn disjunctive concepts from noisy data in either a relational or attribute-value representation. In treating these aspects, we have noticed an interesting tradeoff between the simplicity of candidate concept descriptions and their coverage of previously seen instances. We discuss HILLARY's learning algorithm, its evaluation function, and this tradeoff, and we present empirical studies of the system's learning behavior in both natural and artificial domains. We show that HILLARY's performance deteriorates linearly with the amount of noise, independent of memory limitations. Our results also show that small improvements in performance are gained at the expense of large increases in the number of disjuncts, demonstrating the relevance and importance of the tradeoff. We conclude with ideas for future research.
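The simplicity-coverage tradeoff the abstract describes can be made concrete with a minimal sketch. The snippet below is illustrative only, not HILLARY's actual evaluation function: it assumes attribute-value instances stored as dictionaries, and the `matches` predicate, the linear disjunct penalty, and the `simplicity_weight` parameter are all hypothetical stand-ins.

```python
# Minimal sketch (not from the paper) of scoring a disjunctive concept
# description by trading coverage of stored instances against simplicity.

def matches(disjunct, instance):
    """Hypothetical matcher: a disjunct covers an instance if every
    attribute-value pair in the disjunct appears in the instance."""
    return all(instance.get(attr) == val for attr, val in disjunct.items())

def score(disjuncts, instances, simplicity_weight=0.1):
    """Score = fraction of stored instances covered, minus a penalty per
    disjunct. The linear penalty is an assumed stand-in; adding a disjunct
    must buy enough extra coverage to outweigh its complexity cost."""
    if not instances:
        return 0.0
    covered = sum(any(matches(d, inst) for d in disjuncts)
                  for inst in instances)
    return covered / len(instances) - simplicity_weight * len(disjuncts)

def hill_climb_step(current, candidates, instances):
    """Illustrative hill-climbing step: among candidate revisions of the
    current description, keep whichever scores best (including staying put)."""
    return max(candidates + [current],
               key=lambda d: score(d, instances))
```

Under this scoring scheme, a revision that adds a disjunct covering only a few more instances can score worse than the simpler description it extends, which mirrors the paper's observation that small performance gains may cost large increases in the number of disjuncts.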

Cite

Text

Iba et al. "Trading Off Simplicity and Coverage in Incremental Concept Learning." International Conference on Machine Learning, 1988. doi:10.1016/B978-0-934613-64-4.50013-X

Markdown

[Iba et al. "Trading Off Simplicity and Coverage in Incremental Concept Learning." International Conference on Machine Learning, 1988.](https://mlanthology.org/icml/1988/iba1988icml-trading/) doi:10.1016/B978-0-934613-64-4.50013-X

BibTeX

@inproceedings{iba1988icml-trading,
  title     = {{Trading Off Simplicity and Coverage in Incremental Concept Learning}},
  author    = {Iba, Wayne and Wogulis, James and Langley, Pat},
  booktitle = {International Conference on Machine Learning},
  year      = {1988},
  pages     = {73--79},
  doi       = {10.1016/B978-0-934613-64-4.50013-X},
  url       = {https://mlanthology.org/icml/1988/iba1988icml-trading/}
}