Learning Intermediate Concepts
Abstract
In most concept learning problems considered so far by the learning theory community, the instances are labeled by a single unknown target. However, in some situations, although the target concept may be quite complex when expressed as a function of the attribute values of the instance, it may have a simple relationship with some intermediate (yet to be learned) concepts. In such cases, it may be advantageous to learn these intermediate concepts and the target concept in parallel, and to use the intermediate concepts to enhance our approximation of the target concept. In this paper, we consider the problem of learning multiple interrelated concepts simultaneously. To avoid stability problems, we assume that the dependency relations among the concepts are not cyclical and hence can be expressed as a directed acyclic graph (not known to the learner). We investigate this learning problem in several popular theoretical models: the mistake bound model, the exact learning model, and the probably approximately correct (PAC) model.
Cite
Text
Kwek. "Learning Intermediate Concepts." International Conference on Algorithmic Learning Theory, 2001. doi:10.1007/3-540-45583-3_13

Markdown

[Kwek. "Learning Intermediate Concepts." International Conference on Algorithmic Learning Theory, 2001.](https://mlanthology.org/alt/2001/kwek2001alt-learning/) doi:10.1007/3-540-45583-3_13

BibTeX
@inproceedings{kwek2001alt-learning,
title = {{Learning Intermediate Concepts}},
author = {Kwek, Stephen},
booktitle = {International Conference on Algorithmic Learning Theory},
year = {2001},
pages = {151-166},
doi = {10.1007/3-540-45583-3_13},
url = {https://mlanthology.org/alt/2001/kwek2001alt-learning/}
}