Mislabeled Examples Detection Viewed as Probing Machine Learning Models: Concepts, Survey and Extensive Benchmark
Abstract
Mislabeled examples are ubiquitous in real-world machine learning datasets, motivating the development of techniques for their automatic detection. We show that most mislabeled-example detection methods can be viewed as probing trained machine learning models using a few core principles. We formalize a modular framework that encompasses these methods, parameterized by only four building blocks, as well as a Python library demonstrating that these principles can actually be implemented. The focus is on classifier-agnostic concepts, with an emphasis on adapting methods developed for deep learning models to non-deep classifiers for tabular data. We benchmark existing methods on (artificial) Noisy Completely At Random (NCAR) as well as (realistic) Noisy Not At Random (NNAR) labeling noise from a variety of tasks with imperfect labeling rules. This benchmark provides new insights into existing methods, as well as their limitations, in this setup.
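To make the probing idea concrete, below is a minimal sketch of one such principle: scoring each example by the out-of-fold confidence a trained classifier assigns to its observed label, then flagging the least trusted examples as suspects. This is an illustrative example using scikit-learn, not the paper's library API; the dataset, classifier, and number of flipped labels are assumptions for demonstration.

```python
# Illustrative sketch of probing-based mislabeled detection
# (self-confidence scoring); not the paper's library API.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_predict

# Toy tabular dataset with a few labels flipped completely at random (NCAR).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
rng = np.random.default_rng(0)
flipped = rng.choice(len(y), size=25, replace=False)
y_noisy = y.copy()
y_noisy[flipped] = 1 - y_noisy[flipped]  # binary labels, so flip 0 <-> 1

# Probe: out-of-fold predicted probabilities from a non-deep classifier.
proba = cross_val_predict(
    GradientBoostingClassifier(random_state=0), X, y_noisy,
    cv=5, method="predict_proba",
)

# Trust score: probability assigned to the observed (possibly wrong) label.
trust = proba[np.arange(len(y_noisy)), y_noisy]

# Flag the least trusted examples as suspected mislabeled.
suspects = np.argsort(trust)[:25]
hit_rate = np.isin(suspects, flipped).mean()
print(f"{hit_rate:.0%} of the 25 suspects were truly flipped")
```

Other probes surveyed by the paper (e.g., influence- or margin-based scores) would slot into the same pipeline by replacing the trust score while keeping the ranking-and-flagging step.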
Cite

Text:
George et al. "Mislabeled Examples Detection Viewed as Probing Machine Learning Models: Concepts, Survey and Extensive Benchmark." Transactions on Machine Learning Research, 2024.

Markdown:
[George et al. "Mislabeled Examples Detection Viewed as Probing Machine Learning Models: Concepts, Survey and Extensive Benchmark." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/george2024tmlr-mislabeled/)

BibTeX:
@article{george2024tmlr-mislabeled,
title = {{Mislabeled Examples Detection Viewed as Probing Machine Learning Models: Concepts, Survey and Extensive Benchmark}},
author = {George, Thomas and Nodet, Pierre and Bondu, Alexis and Lemaire, Vincent},
journal = {Transactions on Machine Learning Research},
year = {2024},
url = {https://mlanthology.org/tmlr/2024/george2024tmlr-mislabeled/}
}