Projective DNF Formulae and Their Revision
Abstract
Valiant argued that biology imposes various constraints on learnability, and, motivated by these constraints, introduced his model of projection learning [14]. Projection learning aims to learn a target concept over some large domain, in this paper {0,1}^n, by learning some of its projections to a class of smaller domains, and combining these projections. Valiant proved a general mistake bound for the resulting algorithm under certain conditions. The basic assumption underlying projection learning is that there is a family of simple projections that cover all positive instances of the target, where simple means belonging to some efficiently learnable class. The projections describing the target in this way can also be thought of as a set of experts, each specialized to classify a subset of the instances, such that whenever two experts overlap they always agree in their classification.
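As a minimal sketch of the expert view described above (the representation and the toy projections here are ours, not from the paper): each expert is a pair of a projection domain and a simple classifier over {0,1}^n, the combined hypothesis labels an instance positive iff some expert covering it votes positive, and overlapping experts must agree.

```python
# Toy illustration of combining "projection" experts, in the spirit of
# projection learning. All names and the example projections below are
# hypothetical, chosen only to illustrate the cover-and-agree structure.
from itertools import product

n = 3  # instances are bit-vectors in {0,1}^n

# Each expert = (domain predicate, classifier). Here both experts classify
# by the same simple monomial (x_1 = 1), so they agree on their overlap.
experts = [
    (lambda x: x[0] == 1, lambda x: x[1] == 1),  # expert on {x : x_0 = 1}
    (lambda x: x[2] == 1, lambda x: x[1] == 1),  # expert on {x : x_2 = 1}
]

def combined(x):
    """Positive iff some expert whose projection covers x votes positive."""
    return any(cls(x) for dom, cls in experts if dom(x))

# Consistency check: wherever two experts' domains overlap, they agree.
for x in product([0, 1], repeat=n):
    votes = {cls(x) for dom, cls in experts if dom(x)}
    assert len(votes) <= 1
```

Instances covered by no projection default to negative here, matching the assumption that the projections cover all positive instances of the target.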
Cite
Text
Sloan et al. "Projective DNF Formulae and Their Revision." Annual Conference on Computational Learning Theory, 2003. doi:10.1007/978-3-540-45167-9_45

Markdown

[Sloan et al. "Projective DNF Formulae and Their Revision." Annual Conference on Computational Learning Theory, 2003.](https://mlanthology.org/colt/2003/sloan2003colt-projective/) doi:10.1007/978-3-540-45167-9_45

BibTeX
@inproceedings{sloan2003colt-projective,
title = {{Projective DNF Formulae and Their Revision}},
author = {Sloan, Robert H. and Szörényi, Balázs and Turán, György},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2003},
  pages = {625--639},
doi = {10.1007/978-3-540-45167-9_45},
url = {https://mlanthology.org/colt/2003/sloan2003colt-projective/}
}