Learning Talagrand DNF Formulas
Abstract
Consider the following version of Talagrand's probabilistic construction of a monotone function f: {0,1}^n → {0,1}. Let f be an n-term monotone DNF formula where each term is selected independently and uniformly at random (with replacement) from the set of all n^{Ω(log log n)} possible terms of length log(n) over the first log^2(n) variables. We call such a DNF formula a Talagrand DNF formula. This is a scaled-down version of the construction of Talagrand (1996), which is defined over all n variables and has subexponentially many terms.

I am interested in the following problem: does there exist a polynomial-time algorithm for learning the class of Talagrand DNF formulas under the uniform distribution on {0,1}^n, given uniform random examples? Ideally, I am looking for algorithms that learn the class of all possible Talagrand DNF formulas in the worst case. However, an average-case learning algorithm that succeeds with high probability over the random choice of the Talagrand DNF formula, as described above, would be of significant interest as well.

Motivation: This problem is, of course, a special case of the corresponding learning problem for general polynomial-size DNF formulas. The problem of learning polynomial-size DNF formulas without queries has been open for more than twenty-five years (Valiant, 1984), and there has been no significant progress on the question until the last couple of years.
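To make the construction concrete, the sampling procedure described above can be sketched as follows. This is a hypothetical illustration, not code from the paper; the term length is taken to be log2(n) rounded to an integer, and terms are drawn independently with replacement:

```python
import math
import random

def sample_talagrand_dnf(n, rng=random):
    """Sample a Talagrand DNF: n monotone terms, each a uniformly random
    set of log(n) variables from the first log^2(n) variables.
    (Illustrative sketch; term length log2(n) is rounded to an integer.)"""
    k = max(1, round(math.log2(n)))  # term length: log(n)
    m = k * k                        # variable pool: the first log^2(n) variables
    # Each term is a set of k distinct variables; terms themselves are
    # drawn independently, so repeated terms are possible (with replacement).
    return [frozenset(rng.sample(range(m), k)) for _ in range(n)]

def evaluate_dnf(terms, x):
    """f(x) = 1 iff some monotone term has all its variables set to 1 in x."""
    return int(any(all(x[i] for i in term) for term in terms))
```

For example, with n = 16 each term contains 4 variables drawn from the first 16 variables, and a uniform random example is a uniform bit string x in {0,1}^n labeled by evaluate_dnf(terms, x).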
Cite
Lee, Homin K. "Learning Talagrand DNF Formulas." Annual Conference on Computational Learning Theory (COLT), 2010, pp. 310-311.
BibTeX
@inproceedings{lee2010colt-learning,
title = {{Learning Talagrand DNF Formulas}},
author = {Lee, Homin K.},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2010},
pages = {310-311},
url = {https://mlanthology.org/colt/2010/lee2010colt-learning/}
}