On Learning Visual Concepts and DNF Formulae

Abstract

We consider the problem of learning DNF formulae in the mistake-bound and the PAC models. We develop a new approach, called *polynomial explainability*, which is shown to be useful for learning some new subclasses of DNF (and CNF) formulae that were not previously known to be learnable. Unlike previous learnability results for DNF (and CNF) formulae, these subclasses are not limited in the number of terms or in the number of variables per term; yet, they contain the subclasses of k-DNF and k-term-DNF (and the corresponding classes of CNF) as special cases. We apply our DNF results to the problem of learning visual concepts and obtain learning algorithms for several natural subclasses of visual concepts that appear to have no natural Boolean counterpart. On the other hand, we show that learning some other natural subclasses of visual concepts is as hard as learning the class of all DNF formulae. We also consider the robustness of these results under various types of noise.

Cite

Text

Kushilevitz and Roth. "On Learning Visual Concepts and DNF Formulae." Machine Learning, 1996. doi:10.1007/BF00117833

Markdown

[Kushilevitz and Roth. "On Learning Visual Concepts and DNF Formulae." Machine Learning, 1996.](https://mlanthology.org/mlj/1996/kushilevitz1996mlj-learning/) doi:10.1007/BF00117833

BibTeX

@article{kushilevitz1996mlj-learning,
  title     = {{On Learning Visual Concepts and DNF Formulae}},
  author    = {Kushilevitz, Eyal and Roth, Dan},
  journal   = {Machine Learning},
  year      = {1996},
  pages     = {65--85},
  doi       = {10.1007/BF00117833},
  volume    = {24},
  url       = {https://mlanthology.org/mlj/1996/kushilevitz1996mlj-learning/}
}