Measuring the Effect of Training Data on Deep Learning Predictions via Randomized Experiments
Abstract
We develop a new, principled algorithm for estimating the contribution of training data points to the behavior of a deep learning model, such as a specific prediction it makes. Our algorithm estimates the AME, a quantity that measures the expected (average) marginal effect of adding a data point to a subset of the training data, sampled from a given distribution. When subsets are sampled from the uniform distribution, the AME reduces to the well-known Shapley value. Our approach is inspired by causal inference and randomized experiments: we sample different subsets of the training data to train multiple submodels, and evaluate each submodel's behavior. We then use a LASSO regression to jointly estimate the AME of each data point, based on the subset compositions. Under sparsity assumptions ($k \ll N$ data points have large AME), our estimator requires only $O(k\log N)$ randomized submodel trainings, improving upon the best prior Shapley value estimators.
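The pipeline described in the abstract can be sketched in a few lines: sample random training subsets, train and score a submodel on each, then jointly regress the scores on the subset-inclusion indicators with a LASSO. The sketch below is a minimal illustration, not the paper's exact estimator (the paper applies a specific feature rescaling so that the LASSO coefficients equal the AME); `train_and_score`, `p`, and `alpha` are placeholder assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def estimate_ame(train_and_score, n_points, n_subsets=200, p=0.5, alpha=0.01, seed=0):
    """Illustrative subset-sampling + LASSO estimator of per-point effects.

    train_and_score: hypothetical callable that trains a submodel on the
        given array of training-point indices and returns a scalar measure
        of the behavior of interest (e.g., loss on one test example).
    """
    rng = np.random.default_rng(seed)
    # Sample random subsets: include each point independently with probability p.
    masks = rng.random((n_subsets, n_points)) < p
    # Train one submodel per subset and record its behavior.
    utilities = np.array([train_and_score(np.flatnonzero(m)) for m in masks])
    # Jointly regress the measured utilities on the inclusion indicators.
    # Under sparsity (few points with large effect), the LASSO can recover
    # them from on the order of k log N submodel trainings.
    reg = Lasso(alpha=alpha).fit(masks.astype(float), utilities)
    return reg.coef_  # coefficient i approximates the effect of point i
```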
Cite
Text
Lin et al. "Measuring the Effect of Training Data on Deep Learning Predictions via Randomized Experiments." International Conference on Machine Learning, 2022.
Markdown
[Lin et al. "Measuring the Effect of Training Data on Deep Learning Predictions via Randomized Experiments." International Conference on Machine Learning, 2022.](https://mlanthology.org/icml/2022/lin2022icml-measuring/)
BibTeX
@inproceedings{lin2022icml-measuring,
title = {{Measuring the Effect of Training Data on Deep Learning Predictions via Randomized Experiments}},
author = {Lin, Jinkun and Zhang, Anqi and Lécuyer, Mathias and Li, Jinyang and Panda, Aurojit and Sen, Siddhartha},
booktitle = {International Conference on Machine Learning},
year = {2022},
pages = {13468--13504},
volume = {162},
url = {https://mlanthology.org/icml/2022/lin2022icml-measuring/}
}