Improving Robustness of Random Forest Under Label Noise

Abstract

Random forest is a well-known and widely used machine learning model. In many applications where the training data arise from real-world sources, the data may contain labeling errors. Despite its otherwise strong performance, the basic random forest model does not account for potential label noise during learning, and its performance can therefore suffer significantly when label noise is present. To address this problem, we present a new variation of random forest: a novel learning approach that leads to an improved noise-robust random forest (NRRF) model. We incorporate the noise information by introducing a global multi-class noise-tolerant loss function into the training of the classic random forest model. This new loss function was found to significantly boost the performance of random forest. We evaluated the proposed NRRF through extensive classification experiments on standard machine learning/computer vision datasets such as MNIST, Letter, and CIFAR-10. The proposed NRRF produced very promising results under a wide range of noise settings.
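To give a concrete sense of what a multi-class noise-tolerant loss looks like, here is a minimal sketch of one standard formulation from the label-noise literature: backward loss correction with a class-conditional noise transition matrix. This is an illustrative example of the general idea, not necessarily the exact loss used in the paper; the function name and the assumption of a known transition matrix `T` are ours.

```python
import numpy as np

def backward_corrected_loss(probs, labels, T):
    """Noise-tolerant multi-class cross-entropy via backward correction.

    probs:  (n, k) array of predicted class probabilities.
    labels: (n,) array of observed (possibly noisy) integer labels.
    T:      (k, k) noise transition matrix with
            T[i, j] = P(observed label j | true label i).

    The per-example loss over all candidate true classes is re-weighted
    by the rows of T^{-1}, so that in expectation over the noise process
    the corrected loss matches the clean-label loss.
    """
    # Per-class negative log-likelihood for every example, clipped for stability.
    per_class_loss = -np.log(np.clip(probs, 1e-12, 1.0))  # shape (n, k)
    T_inv = np.linalg.inv(T)
    # Corrected loss for observed label y: sum_c T_inv[y, c] * loss(c).
    corrected = (T_inv[labels] * per_class_loss).sum(axis=1)
    return corrected.mean()
```

With the identity matrix as `T` (no noise), this reduces to ordinary cross-entropy; with a symmetric-noise `T`, the inverse rows down-weight the contribution of the likely-flipped labels.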

Cite

Text

Zhou et al. "Improving Robustness of Random Forest Under Label Noise." IEEE/CVF Winter Conference on Applications of Computer Vision, 2019. doi:10.1109/WACV.2019.00106

Markdown

[Zhou et al. "Improving Robustness of Random Forest Under Label Noise." IEEE/CVF Winter Conference on Applications of Computer Vision, 2019.](https://mlanthology.org/wacv/2019/zhou2019wacv-improving/) doi:10.1109/WACV.2019.00106

BibTeX

@inproceedings{zhou2019wacv-improving,
  title     = {{Improving Robustness of Random Forest Under Label Noise}},
  author    = {Zhou, Xu and Ding, Pak Lun Kevin and Li, Baoxin},
  booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision},
  year      = {2019},
  pages     = {950--958},
  doi       = {10.1109/WACV.2019.00106},
  url       = {https://mlanthology.org/wacv/2019/zhou2019wacv-improving/}
}