Sparse, Efficient and Explainable Data Attribution with DualXDA
Abstract
Data Attribution (DA) is an emerging approach in the field of eXplainable Artificial Intelligence (XAI), aiming to identify the influential training datapoints that determine model outputs. It seeks to provide transparency about the model and individual predictions, e.g., for model debugging or for identifying data-related causes of suboptimal performance. However, existing DA approaches suffer from prohibitively high computational costs and memory demands when applied to even medium-scale datasets and models, forcing practitioners to resort to approximations that may fail to capture the true inference process of the underlying model. Additionally, current attribution methods exhibit low sparsity, yielding non-negligible attribution scores across a large number of training examples and thereby hindering the discovery of decisive patterns in the data. In this work, we introduce DualXDA, a framework for sparse, efficient and explainable DA, comprising two interlinked approaches, Dual Data Attribution (DualDA) and eXplainable Data Attribution (XDA): With DualDA, we propose a novel approach for efficient and effective DA, leveraging Support Vector Machine theory to provide fast and naturally sparse data attributions for AI predictions. In extensive quantitative analyses, we demonstrate that DualDA achieves high attribution quality and excels at solving a series of evaluated downstream tasks, while at the same time improving explanation time by a factor of up to 4,100,000× compared to the original Influence Functions method, and up to 11,000× compared to the method's most efficient approximation from the literature to date. We further introduce XDA, a method for enhancing Data Attribution with capabilities from feature attribution methods, explaining why training samples are relevant for the prediction of a test sample in terms of impactful features, which we showcase and verify qualitatively in detail.
Taken together, our contributions in DualXDA ultimately point towards a future of eXplainable AI applied at unprecedented scale, enabling transparent, efficient and novel analysis of even the largest neural architectures -- such as Large Language Models -- and fostering a new generation of interpretable and accountable AI systems. The implementation of our methods, as well as the full experimental protocol, is available on GitHub.
Cite
Text
Yolcu et al. "Sparse, Efficient and Explainable Data Attribution with DualXDA." Transactions on Machine Learning Research, 2025.
Markdown
[Yolcu et al. "Sparse, Efficient and Explainable Data Attribution with DualXDA." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/yolcu2025tmlr-sparse/)
BibTeX
@article{yolcu2025tmlr-sparse,
title = {{Sparse, Efficient and Explainable Data Attribution with DualXDA}},
author = {Yolcu, Galip Ümit and Weckbecker, Moritz and Wiegand, Thomas and Samek, Wojciech and Lapuschkin, Sebastian},
journal = {Transactions on Machine Learning Research},
year = {2025},
url = {https://mlanthology.org/tmlr/2025/yolcu2025tmlr-sparse/}
}