Learning Functions of K Terms

Abstract

We consider learning in the distribution-free model a generalization of k-term DNF formulas in which the target concept may be a more general function of k monomials. Let C_k be the class of all concepts of the form f(T_1, …, T_k), where f is any {0,1}-valued function on k inputs and T_1, …, T_k are monomials. We present an algorithm that learns the class C_k by the more expressive hypothesis class of general DNF. We also show that for any fixed symmetric function f on k inputs (that is, f depends only on the number of inputs that are 1), learning the class C_{k,f} of concepts of the form f(T_1, …, T_k) by the same class C_{k,f} is NP-hard, except for f ∈ {∧, ¬∧, T, F} (AND, NAND, and the two constant functions), for which learning is easy. Both the algorithm and the NP-hardness results are obtained by first considering the concept class in which f is the XOR or parity function, so that the target concept is an XOR of k terms, and then extending the techniques to more general k-term functions. When applied to k-term DNF formulas, the learning algorithm presented produces a different hypothesis representation than the standard one: instead of learning a k-term DNF by a k-CNF of O(n^k) clauses of size k, it produces a DNF of O(n^{k-1}) terms of size O(n).
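For context, here is a minimal sketch (ours, not from the paper) of the standard elimination learner that the abstract contrasts with: it represents a k-term DNF target by a k-CNF built from all O(n^k) clauses of at most k literals. The function names and example format are illustrative assumptions.

```python
from itertools import combinations

def learn_k_cnf(examples, n, k):
    """Elimination learner for k-CNF hypotheses (Valiant-style sketch).

    examples: iterable of (x, label) pairs, x a length-n 0/1 tuple.
    A clause is a frozenset of literals (i, b), satisfied when x[i] == b
    for some member. Start with all O(n^k) clauses of at most k literals
    over distinct variables, then discard every clause falsified by a
    positive example; the conjunction of survivors is the hypothesis.
    """
    literals = [(i, b) for i in range(n) for b in (0, 1)]
    clauses = set()
    for size in range(1, k + 1):
        for combo in combinations(literals, size):
            if len({i for i, _ in combo}) == size:  # distinct variables only
                clauses.add(frozenset(combo))
    for x, label in examples:
        if label == 1:
            clauses = {c for c in clauses if any(x[i] == b for i, b in c)}
    return clauses

def predict(clauses, x):
    """Hypothesis value: 1 iff every surviving clause is satisfied."""
    return int(all(any(x[i] == b for i, b in c) for c in clauses))

# Tiny usage example: target is the 2-term DNF (x0 AND x1) OR x2, which
# distributes into the 2-CNF (x0 OR x2) AND (x1 OR x2), so elimination
# over the positives yields a hypothesis consistent with all labels.
if __name__ == "__main__":
    target = lambda x: (x[0] and x[1]) or x[2]
    data = [(x, int(target(x)))
            for x in [(1, 1, 0), (0, 0, 1), (1, 0, 0), (0, 1, 0), (1, 0, 1)]]
    h = learn_k_cnf(data, n=3, k=2)
    print([predict(h, x) for x, _ in data])  # matches the labels
```

The abstract's point is that the paper's algorithm replaces this O(n^k)-clause CNF hypothesis with a DNF of O(n^{k-1}) terms of size O(n).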

Cite

Text

Blum and Singh. "Learning Functions of K Terms." Annual Conference on Computational Learning Theory, 1990. doi:10.1016/B978-1-55860-146-8.50014-X

Markdown

[Blum and Singh. "Learning Functions of K Terms." Annual Conference on Computational Learning Theory, 1990.](https://mlanthology.org/colt/1990/blum1990colt-learning/) doi:10.1016/B978-1-55860-146-8.50014-X

BibTeX

@inproceedings{blum1990colt-learning,
  title     = {{Learning Functions of K Terms}},
  author    = {Blum, Avrim and Singh, Mona},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {1990},
  pages     = {144--153},
  doi       = {10.1016/B978-1-55860-146-8.50014-X},
  url       = {https://mlanthology.org/colt/1990/blum1990colt-learning/}
}