Global Refinement of Random Forest
Abstract
Random forest is well known as one of the best learning methods. In spite of its great success, it also has certain drawbacks: the heuristic learning rule does not effectively minimize the global training loss, and the model size is usually too large for many real applications. To address these issues, we propose two techniques, global refinement and global pruning, to improve a pre-trained random forest. The proposed global refinement jointly relearns the leaf nodes of all trees under a global objective function so that the complementary information between multiple trees is well exploited. In this way, the fitting power of the forest is significantly enhanced. The global pruning is developed to reduce the model size as well as the over-fitting risk. The refined model has better performance and smaller storage cost, as verified in extensive experiments.
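The core idea of global refinement, as summarized above, can be sketched in a few lines: encode each sample by which leaf it falls into in every tree, then refit all leaf values at once with a single regularized linear model. The sketch below uses scikit-learn for convenience; the specific estimators, the ridge regularizer, and the toy dataset are illustrative assumptions, not the exact setup of the paper.

```python
# Minimal sketch of global leaf refinement, assuming a scikit-learn
# workflow (the paper does not prescribe this library or solver).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.preprocessing import OneHotEncoder

X, y = make_regression(n_samples=500, n_features=10, noise=0.5, random_state=0)
forest = RandomForestRegressor(n_estimators=20, max_depth=5, random_state=0).fit(X, y)

# Each sample reaches exactly one leaf per tree; one-hot encoding the
# leaf indices across all trees yields a sparse binary feature vector
# (the concatenated leaf-membership indicators).
leaves = forest.apply(X)                       # shape: (n_samples, n_trees)
enc = OneHotEncoder(handle_unknown="ignore").fit(leaves)
Z = enc.transform(leaves)                      # sparse indicator matrix

# Global refinement: relearn all leaf values jointly under one global
# objective (here, a ridge-regularized squared loss), instead of
# fitting each tree's leaves independently during growing.
refined = Ridge(alpha=1.0).fit(Z, y)

# Prediction sums the relearned leaf values selected by each tree.
pred = refined.predict(enc.transform(forest.apply(X)))
```

Global pruning would then merge or drop leaves whose relearned weights are negligible, shrinking the indicator dimension and the stored model.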
Cite
Text
Ren et al. "Global Refinement of Random Forest." Conference on Computer Vision and Pattern Recognition, 2015. doi:10.1109/CVPR.2015.7298672
Markdown
[Ren et al. "Global Refinement of Random Forest." Conference on Computer Vision and Pattern Recognition, 2015.](https://mlanthology.org/cvpr/2015/ren2015cvpr-global/) doi:10.1109/CVPR.2015.7298672
BibTeX
@inproceedings{ren2015cvpr-global,
title = {{Global Refinement of Random Forest}},
author = {Ren, Shaoqing and Cao, Xudong and Wei, Yichen and Sun, Jian},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2015},
doi = {10.1109/CVPR.2015.7298672},
url = {https://mlanthology.org/cvpr/2015/ren2015cvpr-global/}
}