Efficient Data Shapley for Weighted Nearest Neighbor Algorithms
Abstract
This work addresses an open problem in the data valuation literature concerning the efficient computation of Data Shapley for the weighted $K$-nearest neighbor algorithm (WKNN-Shapley). By taking the accuracy of hard-label KNN with discretized weights as the utility function, we reframe the computation of WKNN-Shapley as a counting problem and introduce a quadratic-time algorithm, a notable improvement over $O(N^K)$, the best result in the existing literature. We further develop a deterministic approximation algorithm that improves computational efficiency while preserving the key fairness properties of the Shapley value. Through extensive experiments, we demonstrate WKNN-Shapley's computational efficiency and its superior performance in discerning data quality compared to its unweighted counterpart.
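As a concrete illustration of the utility function named in the abstract, the sketch below shows one way a hard-label weighted KNN accuracy utility with discretized weights could be written. The distance metric, weight-discretization grid, and all function and parameter names here are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def discretize_weights(weights, num_levels=4):
    """Map raw (e.g., distance-based) weights onto a small grid of levels.
    The grid size and rounding rule are illustrative assumptions."""
    levels = np.linspace(0.0, 1.0, num_levels + 1)[1:]  # e.g. {0.25, 0.5, 0.75, 1.0}
    idx = np.clip(np.searchsorted(levels, weights), 0, num_levels - 1)
    return levels[idx]

def wknn_hard_label_utility(X_S, y_S, x_val, y_val, K=5):
    """Utility of a training subset S: 1 if the weighted hard-label K-NN vote
    on S predicts y_val correctly, else 0 (accuracy on one validation point)."""
    if len(y_S) == 0:
        return 0.0
    dists = np.linalg.norm(X_S - x_val, axis=1)
    k = min(K, len(y_S))
    nn = np.argsort(dists)[:k]
    # Distance-decaying weights, then discretized as assumed above.
    raw_w = 1.0 / (1.0 + dists[nn])
    w = discretize_weights(raw_w)
    # Weighted majority (hard-label) vote among the K nearest neighbors.
    votes = {}
    for j, wj in zip(nn, w):
        votes[y_S[j]] = votes.get(y_S[j], 0.0) + wj
    y_pred = max(votes, key=votes.get)
    return float(y_pred == y_val)
```

The Data Shapley value of a training point $z$ is then the weighted average of its marginal contributions $U(S \cup \{z\}) - U(S)$ over subsets $S$; the paper's contribution is computing this exactly in quadratic time by recasting it as a counting problem, rather than enumerating subsets.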
Cite
Text
Wang et al. "Efficient Data Shapley for Weighted Nearest Neighbor Algorithms." Artificial Intelligence and Statistics, 2024.
Markdown
[Wang et al. "Efficient Data Shapley for Weighted Nearest Neighbor Algorithms." Artificial Intelligence and Statistics, 2024.](https://mlanthology.org/aistats/2024/wang2024aistats-efficient/)
BibTeX
@inproceedings{wang2024aistats-efficient,
  title = {{Efficient Data Shapley for Weighted Nearest Neighbor Algorithms}},
  author = {Wang, Jiachen T. and Mittal, Prateek and Jia, Ruoxi},
  booktitle = {Artificial Intelligence and Statistics},
  year = {2024},
  pages = {2557--2565},
  volume = {238},
  url = {https://mlanthology.org/aistats/2024/wang2024aistats-efficient/}
}