On the Limits of Proper Learnability of Subclasses of DNF Formulas

Abstract

Bshouty, Goldman, Hancock, and Matar have shown that DNF formulas with up to log n terms can be properly learned in the exact model with equivalence and membership queries. Given standard complexity-theoretic assumptions, we show that this positive result for proper learning cannot be significantly improved in the exact model or in the PAC model extended to allow membership queries. Our negative results are derived from two general techniques for proving such results in the exact model and the extended PAC model. As a further application of these techniques, we consider read-thrice DNF formulas. Here we improve on Aizenstein, Hellerstein, and Pitt's negative result for proper learning in the exact model in two ways. First, we show that their assumption of NP ≠ co-NP can be replaced with the weaker assumption of P ≠ NP. Second, we show that read-thrice DNF formulas are not properly learnable in the extended PAC model, assuming RP ≠ NP.

Cite

Text

Pillaipakkamnatt and Raghavan. "On the Limits of Proper Learnability of Subclasses of DNF Formulas." Annual Conference on Computational Learning Theory, 1994. doi:10.1145/180139.181063

Markdown

[Pillaipakkamnatt and Raghavan. "On the Limits of Proper Learnability of Subclasses of DNF Formulas." Annual Conference on Computational Learning Theory, 1994.](https://mlanthology.org/colt/1994/pillaipakkamnatt1994colt-limits/) doi:10.1145/180139.181063

BibTeX

@inproceedings{pillaipakkamnatt1994colt-limits,
  title     = {{On the Limits of Proper Learnability of Subclasses of DNF Formulas}},
  author    = {Pillaipakkamnatt, Krishnan and Raghavan, Vijay},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {1994},
  pages     = {118--129},
  doi       = {10.1145/180139.181063},
  url       = {https://mlanthology.org/colt/1994/pillaipakkamnatt1994colt-limits/}
}