Does Imputation Matter? Benchmark for Real-Life Classification Problems.
Abstract
Incomplete data are common in practical applications. Most predictive machine learning models cannot handle missing values directly, so the data require preprocessing. Although many algorithms are used for data imputation, the impact of the different methods on predictive models' performance is not well understood. This paper is the first to systematically evaluate the empirical effectiveness of data imputation algorithms for predictive models. The main contributions are (1) the recommendation of a general method for empirical benchmarking based on real-life classification tasks and (2) a comparative analysis of different imputation methods across a collection of data sets and a collection of ML algorithms.
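The benchmarking idea described in the abstract — comparing imputation methods by the downstream performance of a classifier — can be sketched as follows. This is an illustrative example, not the paper's actual protocol: the data set, the masking rate, and the choice of imputers and classifier are assumptions made here for demonstration, using scikit-learn.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer, KNNImputer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Take a complete data set and mask 20% of entries at random (MCAR),
# so we can study imputation in a controlled way.
X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)
X_missing = X.copy()
X_missing[rng.random(X.shape) < 0.2] = np.nan

# Compare imputers by the cross-validated accuracy of the same
# downstream classifier; the imputer is the only varying component.
results = {}
for name, imputer in [("mean", SimpleImputer(strategy="mean")),
                      ("knn", KNNImputer(n_neighbors=5))]:
    pipe = make_pipeline(imputer, RandomForestClassifier(random_state=0))
    results[name] = cross_val_score(pipe, X_missing, y, cv=5).mean()

for name, acc in results.items():
    print(f"{name} imputation: accuracy {acc:.3f}")
```

Repeating this loop over many data sets and several ML algorithms, as the paper proposes, yields an empirical ranking of imputation methods rather than a single-task anecdote.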
Cite
Text

Woźnica and Biecek. "Does Imputation Matter? Benchmark for Real-Life Classification Problems." ICML 2020 Workshops: Artemiss, 2020.

Markdown

[Woźnica and Biecek. "Does Imputation Matter? Benchmark for Real-Life Classification Problems." ICML 2020 Workshops: Artemiss, 2020.](https://mlanthology.org/icmlw/2020/woznica2020icmlw-imputation/)

BibTeX
@inproceedings{woznica2020icmlw-imputation,
  title = {{Does Imputation Matter? Benchmark for Real-Life Classification Problems}},
  author = {Woźnica, Katarzyna and Biecek, Przemyslaw},
  booktitle = {ICML 2020 Workshops: Artemiss},
  year = {2020},
  url = {https://mlanthology.org/icmlw/2020/woznica2020icmlw-imputation/}
}