Decision Tree Grafting from the All Tests but One Partition
Abstract
Decision tree grafting adds nodes to an existing decision tree with the objective of reducing prediction error. A new grafting algorithm is presented that considers only one set of training data for each leaf of the initial decision tree: the set of cases that fail at most one test on the path to the leaf. This new technique is demonstrated to retain the error-reduction power of the original grafting algorithm while dramatically reducing compute time and the complexity of the inferred tree. Bias/variance analyses reveal that the original grafting technique operated primarily by variance reduction, while the new technique reduces both bias and variance.

1 Introduction

Decision committee techniques, notably AdaBoost [Freund & Schapire, 1996] and bagging [Breiman, 1996], have demonstrated spectacular success at reducing decision tree error across a wide variety of learning tasks [Quinlan, 1996; Bauer & Kohavi, in press]. These techniques apply a base learning algorithm...
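The "all tests but one" partition described above can be sketched directly: for a given leaf, collect the training cases that fail at most one of the boolean tests on the path from the root to that leaf. The following is a minimal illustrative sketch, not the paper's implementation; the function name, case representation, and toy tests are all assumptions.

```python
def all_tests_but_one(cases, path_tests):
    """Return the cases that fail at most one test on the path to a leaf.

    cases      -- iterable of training cases (here, plain values)
    path_tests -- list of boolean predicates, one per test on the path
    """
    selected = []
    for case in cases:
        # Count how many of the path's tests this case fails.
        failures = sum(1 for test in path_tests if not test(case))
        if failures <= 1:
            selected.append(case)
    return selected


# Toy example: a path with two tests, x > 2 and x is even.
cases = [1, 3, 4, 5, 2]
path = [lambda x: x > 2, lambda x: x % 2 == 0]
print(all_tests_but_one(cases, path))  # 1 fails both tests and is excluded
```

Cases failing zero tests are exactly those reaching the leaf in the original tree; admitting one failure lets grafting consider nearby cases without scanning the full training set at every leaf.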
Cite
Webb, Geoffrey I. "Decision Tree Grafting from the All Tests but One Partition." International Joint Conference on Artificial Intelligence, 1999.
@inproceedings{webb1999ijcai-decision,
title = {{Decision Tree Grafting from the All Tests but One Partition}},
author = {Webb, Geoffrey I.},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {1999},
  pages = {702--707},
url = {https://mlanthology.org/ijcai/1999/webb1999ijcai-decision/}
}