Model Selection in Omnivariate Decision Trees
Abstract
We propose an omnivariate decision tree architecture that contains univariate, multivariate linear, or multivariate nonlinear nodes, matching the complexity of each node to the complexity of the data reaching it. We compare model selection techniques, including AIC, BIC, and cross-validation (CV), for choosing among the three node types on standard datasets from the UCI repository, and find that such omnivariate trees, with a small percentage of multivariate nodes close to the root, generalize better than pure trees with the same type of node everywhere. CV produces simpler trees than AIC and BIC without sacrificing expected error; its only disadvantage is a longer training time.
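The node-level choice the abstract describes can be sketched with the standard AIC and BIC formulas: each candidate node type (univariate, linear multivariate, nonlinear multivariate) is fit at a node, and the type minimizing the penalized criterion is kept. The log-likelihoods and parameter counts below are hypothetical illustrations, not values from the paper; a minimal sketch, assuming each candidate is summarized by its fitted log-likelihood and number of free parameters:

```python
import math

def aic(log_likelihood, num_params):
    # AIC = -2 log L + 2 d : penalty of 2 per free parameter
    return -2.0 * log_likelihood + 2.0 * num_params

def bic(log_likelihood, num_params, n):
    # BIC = -2 log L + d log n : penalty grows with sample size n
    return -2.0 * log_likelihood + num_params * math.log(n)

def select_node_type(candidates, n, criterion="bic"):
    # candidates: dict mapping node type -> (log_likelihood, num_params)
    if criterion == "aic":
        score = lambda ll, d: aic(ll, d)
    else:
        score = lambda ll, d: bic(ll, d, n)
    return min(candidates, key=lambda k: score(*candidates[k]))

# Hypothetical fits at one node: more complex node types fit the data
# better (higher log-likelihood) but use more parameters.
candidates = {
    "univariate": (-120.0, 2),
    "linear_multivariate": (-95.0, 10),
    "nonlinear_multivariate": (-60.0, 40),
}

# With n = 200, BIC's heavier per-parameter penalty (log 200 ~ 5.3 vs. 2)
# keeps the simpler linear node, while AIC accepts the nonlinear one.
print(select_node_type(candidates, n=200, criterion="aic"))  # nonlinear_multivariate
print(select_node_type(candidates, n=200, criterion="bic"))  # linear_multivariate
```

This illustrates why the criteria can disagree at a node: BIC's penalty scales with the sample size, so it tends to prefer simpler node types than AIC on the same fits.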
Cite
Text
Yildiz and Alpaydin. "Model Selection in Omnivariate Decision Trees." European Conference on Machine Learning, 2005. doi:10.1007/11564096_45
Markdown
[Yildiz and Alpaydin. "Model Selection in Omnivariate Decision Trees." European Conference on Machine Learning, 2005.](https://mlanthology.org/ecmlpkdd/2005/yildiz2005ecml-model/) doi:10.1007/11564096_45
BibTeX
@inproceedings{yildiz2005ecml-model,
  title     = {{Model Selection in Omnivariate Decision Trees}},
  author    = {Yildiz, Olcay Taner and Alpaydin, Ethem},
  booktitle = {European Conference on Machine Learning},
  year      = {2005},
  pages     = {473--484},
  doi       = {10.1007/11564096_45},
  url       = {https://mlanthology.org/ecmlpkdd/2005/yildiz2005ecml-model/}
}