On the Limits of Proper Learnability of Subclasses of DNF Formulas
Abstract
Bshouty, Goldman, Hancock, and Matar have shown that DNF formulas with up to \(\sqrt{\log n}\) terms can be properly learned in the exact model with equivalence and membership queries. Under standard complexity-theoretic assumptions, we show that this positive result for proper learning cannot be significantly improved in the exact model or in the PAC model extended to allow membership queries. Our negative results are derived from two general techniques for proving such results in the exact model and the extended PAC model. As a further application of these techniques, we consider read-thrice DNF formulas. Here we improve on the negative result of Aizenstein, Hellerstein, and Pitt for proper learning in the exact model in two ways. First, we show that their assumption of NP ≠ co-NP can be replaced with the weaker assumption of P ≠ NP. Second, we show that read-thrice DNF formulas are not properly learnable in the extended PAC model, assuming RP ≠ NP.
Cite
Pillaipakkamnatt and Raghavan. "On the Limits of Proper Learnability of Subclasses of DNF Formulas." Machine Learning, 1996. doi:10.1023/A:1026455409889
BibTeX
@article{pillaipakkamnatt1996mlj-limits,
title = {{On the Limits of Proper Learnability of Subclasses of DNF Formulas}},
author = {Pillaipakkamnatt, Krishnan and Raghavan, Vijay},
journal = {Machine Learning},
year = {1996},
pages = {237--263},
doi = {10.1023/A:1026455409889},
volume = {25},
url = {https://mlanthology.org/mlj/1996/pillaipakkamnatt1996mlj-limits/}
}