DNF Are Teachable in the Average Case
Abstract
We study the average number of well-chosen labeled examples that are required for a helpful teacher to uniquely specify a target function within a concept class. This “average teaching dimension” has been studied in learning theory and combinatorics and is an attractive alternative to the “worst-case” teaching dimension of Goldman and Kearns [7], which is exponential for many interesting concept classes. Recently Balbach [3] showed that the classes of 1-decision lists and 2-term DNF each have linear average teaching dimension. As our main result, we extend Balbach’s teaching result for 2-term DNF by showing that for any 1 ≤ s ≤ 2^Θ(n), the well-studied concept classes of at-most-s-term DNF and at-most-s-term monotone DNF each have average teaching dimension O(ns). The proofs use detailed analyses of the combinatorial structure of “most” DNF formulas and monotone DNF formulas. We also establish asymptotic separations between the worst-case and average teaching dimension for various other interesting Boolean concept classes such as juntas and sparse GF(2) polynomials.
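The teaching dimension of a concept c within a class C is the smallest set of labeled examples consistent with c but with no other concept in C; the worst-case and average teaching dimensions are then the maximum and mean of this quantity over C. The following toy sketch (not from the paper; class, names, and parameters are illustrative) computes both quantities by brute force for the class of monotone conjunctions over n = 2 Boolean variables:

```python
from itertools import combinations, product

def teaching_number(target, concepts, domain):
    """Smallest number of labeled examples that rule out every
    concept in `concepts` other than `target`."""
    labeled = [(x, target(x)) for x in domain]
    for k in range(len(domain) + 1):
        for sample in combinations(labeled, k):
            # `sample` teaches `target` iff every other concept
            # disagrees with at least one labeled example in it
            if all(any(c(x) != y for (x, y) in sample)
                   for c in concepts if c is not target):
                return k
    return len(domain)

# Toy class: monotone conjunctions over n = 2 Boolean variables,
# encoded by the subset S of variables required to be 1.
n = 2
domain = list(product([0, 1], repeat=n))

def make_conj(S):
    return lambda x, S=S: int(all(x[i] for i in S))

concepts = [make_conj(S)
            for r in range(n + 1)
            for S in combinations(range(n), r)]

tds = [teaching_number(c, concepts, domain) for c in concepts]
worst_case = max(tds)
average = sum(tds) / len(tds)
print(worst_case, average)  # → 2 1.75
```

Here the empty conjunction (the constant-1 function) is taught by the single example ((0,0), 1), while each of the other three conjunctions needs two examples, so the average (1.75) already sits below the worst case (2). The paper's results concern the much larger classes of at-most-s-term DNF, where this gap becomes exponential.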
Cite
Text
Lee et al. "DNF Are Teachable in the Average Case." Annual Conference on Computational Learning Theory, 2006. doi:10.1007/11776420_18
Markdown
[Lee et al. "DNF Are Teachable in the Average Case." Annual Conference on Computational Learning Theory, 2006.](https://mlanthology.org/colt/2006/lee2006colt-dnf/) doi:10.1007/11776420_18
BibTeX
@inproceedings{lee2006colt-dnf,
title = {{DNF Are Teachable in the Average Case}},
author = {Lee, Homin K. and Servedio, Rocco A. and Wan, Andrew},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2006},
pages = {214--228},
doi = {10.1007/11776420_18},
url = {https://mlanthology.org/colt/2006/lee2006colt-dnf/}
}