One-Stage Tree: End-to-End Tree Builder and Pruner

Abstract

Decision trees have favorable properties, including interpretability, high computational efficiency, and the ability to learn from little training data. Learning an optimal decision tree is known to be NP-complete, so researchers have proposed greedy algorithms such as CART to learn approximate solutions. Inspired by the success of neural networks, soft trees that support end-to-end training with back-propagation have attracted increasing attention. However, existing soft trees either lose interpretability due to continuous relaxation or adopt a two-stage approach of end-to-end building followed by pruning. In this paper, we propose One-Stage Tree, which builds and prunes the decision tree jointly through a bilevel optimization problem. Moreover, we leverage the reparameterization trick and proximal iterations to keep the tree discrete during end-to-end training. As a result, One-Stage Tree reduces the performance gap between training and testing and maintains the advantage of interpretability. Extensive experiments demonstrate that One-Stage Tree outperforms CART and existing soft trees on both classification and regression tasks.
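The abstract's central mechanism is keeping routing decisions discrete while still training end-to-end with back-propagation. As a rough illustration only (not the authors' implementation, which combines the reparameterization trick with proximal iterations inside a bilevel problem), the straight-through Gumbel-softmax reparameterization trick produces hard, one-hot routing decisions in the forward pass while gradients flow through a soft relaxation in the backward pass. The sketch below shows this for a single internal tree node in PyTorch; the node parameters w and b and the temperature tau are hypothetical names chosen for illustration.

import torch
import torch.nn.functional as F

def route(x, w, b, tau=1.0):
    # Logit for sending each sample left vs. right at one internal node.
    s = x @ w + b                          # shape: (batch,)
    logits = torch.stack([s, -s], dim=-1)  # shape: (batch, 2)
    # hard=True: one-hot (discrete) decisions in the forward pass;
    # gradients flow through the soft relaxation in the backward pass.
    return F.gumbel_softmax(logits, tau=tau, hard=True)

# Example: route a batch of 4 samples with 3 features.
x = torch.randn(4, 3)
w = torch.randn(3, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
decisions = route(x, w, b)                 # each row is [1, 0] or [0, 1]

Because the forward pass is already discrete, such a scheme avoids the train/test mismatch that purely soft trees incur when their continuous routing is hardened after training.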

Cite

Text

Xu et al. "One-Stage Tree: End-to-End Tree Builder and Pruner." Machine Learning, 2022. doi:10.1007/s10994-021-06094-4

Markdown

[Xu et al. "One-Stage Tree: End-to-End Tree Builder and Pruner." Machine Learning, 2022.](https://mlanthology.org/mlj/2022/xu2022mlj-onestage/) doi:10.1007/s10994-021-06094-4

BibTeX

@article{xu2022mlj-onestage,
  title     = {{One-Stage Tree: End-to-End Tree Builder and Pruner}},
  author    = {Xu, Zhuoer and Zhu, Guanghui and Yuan, Chunfeng and Huang, Yihua},
  journal   = {Machine Learning},
  year      = {2022},
  pages     = {1959--1985},
  doi       = {10.1007/s10994-021-06094-4},
  volume    = {111},
  url       = {https://mlanthology.org/mlj/2022/xu2022mlj-onestage/}
}