DNF Are Teachable in the Average Case
Abstract
We study the average number of well-chosen labeled examples that are required for a helpful teacher to uniquely specify a target function within a concept class. This “average teaching dimension” has been studied in learning theory and combinatorics and is an attractive alternative to the “worst-case” teaching dimension of Goldman and Kearns, which is exponential for many interesting concept classes. Recently Balbach showed that the classes of 1-decision lists and 2-term DNF each have linear average teaching dimension. As our main result, we extend Balbach’s teaching result for 2-term DNF by showing that for any 1 ≤ s ≤ 2^{Θ(n)}, the well-studied concept classes of at-most-s-term DNF and at-most-s-term monotone DNF each have average teaching dimension O(ns). The proofs use detailed analyses of the combinatorial structure of “most” DNF formulas and monotone DNF formulas. We also establish asymptotic separations between the worst-case and average teaching dimension for various other interesting Boolean concept classes such as juntas and sparse GF(2) polynomials.
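For readers unfamiliar with the term, the abstract’s central quantity can be sketched via the standard Goldman–Kearns formalization; the notation below is ours, not the paper’s:

% A teaching set for a concept c in a class C is a set S of labeled
% examples consistent with c and with no other concept in C.
% Notation here is illustrative, not taken from the paper.
\[
  \mathrm{TD}(c, C) \;=\; \min \bigl\{\, |S| : S \text{ is a teaching set for } c \text{ w.r.t. } C \,\bigr\}
\]
\[
  \mathrm{TD}(C) \;=\; \max_{c \in C} \mathrm{TD}(c, C),
  \qquad
  \overline{\mathrm{TD}}(C) \;=\; \frac{1}{|C|} \sum_{c \in C} \mathrm{TD}(c, C)
\]
% The paper's results bound the average teaching dimension
% \overline{TD}(C), which can be far smaller than the worst-case TD(C).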
Cite
Text
Lee et al. "DNF Are Teachable in the Average Case." Machine Learning, 2007. doi:10.1007/S10994-007-5007-9Markdown
[Lee et al. "DNF Are Teachable in the Average Case." Machine Learning, 2007.](https://mlanthology.org/mlj/2007/lee2007mlj-dnf/) doi:10.1007/S10994-007-5007-9BibTeX
@article{lee2007mlj-dnf,
title = {{DNF Are Teachable in the Average Case}},
author = {Lee, Homin K. and Servedio, Rocco A. and Wan, Andrew},
journal = {Machine Learning},
year = {2007},
pages = {79--96},
doi = {10.1007/S10994-007-5007-9},
volume = {69},
url = {https://mlanthology.org/mlj/2007/lee2007mlj-dnf/}
}