Why Skewing Works: Learning Difficult Boolean Functions with Greedy Tree Learners
Abstract
We analyze skewing, an approach that has been empirically observed to enable greedy decision tree learners to learn "difficult" Boolean functions, such as parity, in the presence of irrelevant variables. We prove that, in an idealized setting, for any function and choice of skew parameters, skewing finds relevant variables with probability 1. We present experiments exploring how different parameter choices affect the success of skewing in empirical settings. Finally, we analyze a variant of skewing called Sequential Skewing.
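To illustrate the phenomenon the abstract refers to: under the uniform distribution, every variable of a parity function — relevant or irrelevant — has zero information gain, so a greedy tree learner has nothing to split on. Skewing reweights the examples toward a randomly chosen "favored" assignment, which gives relevant variables nonzero gain while irrelevant ones stay at zero. The sketch below demonstrates this on 3-bit parity with one irrelevant variable; the favored value (1 for every variable) and the skew weight 0.75 are illustrative choices, not parameters taken from the paper.

```python
import math
from itertools import product

def entropy(p):
    """Binary entropy of a positive-class probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def weighted_gain(examples, labels, weights, i):
    """Information gain of splitting on variable i under example weights."""
    def stats(rows):
        w = sum(weights[j] for j in rows)
        pos = sum(weights[j] for j in rows if labels[j])
        return w, (pos / w if w else 0.0)
    idx = list(range(len(examples)))
    total_w, p_all = stats(idx)
    left = [j for j in idx if examples[j][i] == 0]
    right = [j for j in idx if examples[j][i] == 1]
    wl, pl = stats(left)
    wr, pr = stats(right)
    return entropy(p_all) - (wl / total_w) * entropy(pl) - (wr / total_w) * entropy(pr)

# Full truth table: 3-bit parity target x0 ^ x1 ^ x2, plus irrelevant x3.
examples = list(product([0, 1], repeat=4))
labels = [x[0] ^ x[1] ^ x[2] for x in examples]

# Uniform weights: every variable has zero gain, so greedy splitting is blind.
uniform = [1.0] * len(examples)
plain = [weighted_gain(examples, labels, uniform, i) for i in range(4)]

# Skewed weights: favor value 1 for each variable with weight 0.75
# (an illustrative skew, not the paper's exact parameterization).
skewed_w = [math.prod(0.75 if b else 0.25 for b in x) for x in examples]
skew = [weighted_gain(examples, labels, skewed_w, i) for i in range(4)]

# Under the skew, the relevant variables x0..x2 acquire positive gain,
# while the irrelevant x3 remains at zero gain.
```

Running this, all four uniform gains are 0, while the skewed gains are positive for x0, x1, x2 and still 0 for x3, which is why a single skew already separates relevant from irrelevant variables in this toy setting.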
Cite
Text
Rosell et al. "Why Skewing Works: Learning Difficult Boolean Functions with Greedy Tree Learners." International Conference on Machine Learning, 2005. doi:10.1145/1102351.1102443
Markdown
[Rosell et al. "Why Skewing Works: Learning Difficult Boolean Functions with Greedy Tree Learners." International Conference on Machine Learning, 2005.](https://mlanthology.org/icml/2005/rosell2005icml-skewing/) doi:10.1145/1102351.1102443
BibTeX
@inproceedings{rosell2005icml-skewing,
title = {{Why Skewing Works: Learning Difficult Boolean Functions with Greedy Tree Learners}},
author = {Rosell, Bernard and Hellerstein, Lisa and Ray, Soumya and Page, David},
booktitle = {International Conference on Machine Learning},
year = {2005},
pages = {728--735},
doi = {10.1145/1102351.1102443},
url = {https://mlanthology.org/icml/2005/rosell2005icml-skewing/}
}