Learning Decision Trees from Random Examples

Abstract

We define the rank of a decision tree and show that for any fixed r, the class of all decision trees of rank at most r on n Boolean variables is learnable from random examples in time polynomial in n and linear in 1/ε and log(1/δ), where ε is the accuracy parameter and δ is the confidence parameter. Using a suitable encoding of variables, Rivest's polynomial learnability result for decision lists can be interpreted as a special case of this result for rank 1. As another corollary, we show that decision trees on n Boolean variables of size polynomial in n are learnable from random examples in time linear in n^{O(log n)}, 1/ε, and log(1/δ). As a third corollary, we show that Boolean functions that have polynomial-size DNF expressions for both their positive and their negative instances are learnable from random examples in time linear in n^{O((log n)^2)}, 1/ε, and log(1/δ).
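The rank mentioned in the abstract is defined recursively: a leaf has rank 0, and an internal node whose two subtrees have ranks r0 and r1 has rank max(r0, r1) when they differ, and r0 + 1 when they are equal. A minimal sketch of this computation, using a hypothetical `Node` class for binary decision trees:

```python
# Sketch of the rank computation for a binary decision tree.
# Definition assumed: a leaf has rank 0; an internal node whose subtrees
# have ranks r0 and r1 has rank max(r0, r1) if r0 != r1, else r0 + 1.

class Node:
    """Hypothetical decision-tree node; a leaf has no children."""
    def __init__(self, left=None, right=None):
        self.left = left    # subtree followed when the tested variable is 0
        self.right = right  # subtree followed when the tested variable is 1

def rank(node):
    # A leaf (no children) has rank 0.
    if node.left is None and node.right is None:
        return 0
    r0, r1 = rank(node.left), rank(node.right)
    return max(r0, r1) if r0 != r1 else r0 + 1
```

Under this definition a complete tree of depth d has rank d, while a chain-shaped tree in which every internal node has at least one leaf child (the shape of a decision list) has rank 1, which is why Rivest's decision-list result appears as the rank-1 special case.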

Cite

Text

Ehrenfeucht and Haussler. "Learning Decision Trees from Random Examples." Annual Conference on Computational Learning Theory, 1988. doi:10.1016/0890-5401(89)90001-1

Markdown

[Ehrenfeucht and Haussler. "Learning Decision Trees from Random Examples." Annual Conference on Computational Learning Theory, 1988.](https://mlanthology.org/colt/1988/ehrenfeucht1988colt-learning/) doi:10.1016/0890-5401(89)90001-1

BibTeX

@inproceedings{ehrenfeucht1988colt-learning,
  title     = {{Learning Decision Trees from Random Examples}},
  author    = {Ehrenfeucht, Andrzej and Haussler, David},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {1988},
  pages     = {182--194},
  doi       = {10.1016/0890-5401(89)90001-1},
  url       = {https://mlanthology.org/colt/1988/ehrenfeucht1988colt-learning/}
}