On Learning Monotone DNF Under Product Distributions
Abstract
We show that the class of monotone $2^{O(\sqrt{\log n})}$-term DNF formulae can be PAC learned in polynomial time under the uniform distribution from random examples only. This is an exponential improvement over the best previous polynomial-time algorithms in this model, which could learn monotone $o(\log^2 n)$-term DNF, and is the first efficient algorithm for monotone $(\log n)^{\omega(1)}$-term DNF in any nontrivial model of learning from random examples. We also show that various classes of small constant-depth circuits which compute monotone functions on few input variables are PAC learnable in polynomial time under the uniform distribution. All of our results extend to learning under any constant-bounded product distribution.
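As illustrative background only (this is not the paper's algorithm), the sketch below shows what "random examples under a product distribution" means in the PAC setting: an example oracle draws each bit $x_i$ independently with its own probability $\mu_i$ (all $\mu_i = 1/2$ gives the uniform distribution), and a learner may compute simple statistics such as the correlation $\mathbf{E}[f(x)\,x_i]$, which for a monotone target tends to be larger on relevant variables. The function names and the toy target DNF are assumptions chosen for this sketch.

```python
import random

def product_example_oracle(f, mu, rng):
    """Draw one labeled example (x, f(x)) where bit x_i is 1 independently
    with probability mu[i] -- a product distribution. All mu[i] = 0.5
    gives the uniform distribution over {0,1}^n."""
    x = tuple(1 if rng.random() < p else 0 for p in mu)
    return x, f(x)

def estimate_correlations(f, mu, m, rng):
    """Estimate E[f(x) * x_i] for each i from m random examples.
    For a monotone target, variables that actually appear in the DNF
    tend to show larger correlation with the label than irrelevant ones;
    this is the kind of statistic a distribution-specific PAC learner
    can compute from random examples alone."""
    n = len(mu)
    sums = [0.0] * n
    for _ in range(m):
        x, y = product_example_oracle(f, mu, rng)
        for i in range(n):
            sums[i] += y * x[i]
    return [s / m for s in sums]

# Toy monotone 2-term DNF on 4 variables: f(x) = (x0 AND x1) OR x2.
f = lambda x: int((x[0] and x[1]) or x[2])
rng = random.Random(0)
corr = estimate_correlations(f, [0.5] * 4, 5000, rng)
# The relevant variables 0, 1, 2 correlate more strongly with the
# label than the irrelevant variable 3.
```

Under the uniform distribution here, $\mathbf{E}[f(x)\,x_2] = 1/2$ and $\mathbf{E}[f(x)\,x_0] = 3/8$, versus $\mathbf{E}[f(x)\,x_3] = \Pr[f(x)=1]/2 = 5/16$ for the irrelevant bit, so the gap is detectable from polynomially many samples.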
Cite
Servedio. "On Learning Monotone DNF Under Product Distributions." Annual Conference on Computational Learning Theory, 2001. doi:10.1007/3-540-44581-1_37
@inproceedings{servedio2001colt-learning,
title = {{On Learning Monotone DNF Under Product Distributions}},
author = {Servedio, Rocco A.},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2001},
pages = {558-573},
doi = {10.1007/3-540-44581-1_37},
url = {https://mlanthology.org/colt/2001/servedio2001colt-learning/}
}