Training Data Attribution via Approximate Unrolling

Abstract

Many training data attribution (TDA) methods aim to estimate how a model's behavior would change if one or more data points were removed from the training set. Methods based on implicit differentiation, such as influence functions, can be made computationally efficient, but fail to account for underspecification, the implicit bias of the optimization algorithm, or multi-stage training pipelines. By contrast, methods based on unrolling address these issues but face scalability challenges. In this work, we connect the implicit-differentiation-based and unrolling-based approaches and combine their benefits by introducing Source, an approximate unrolling-based TDA method that is computed using an influence-function-like formula. While being computationally efficient compared to unrolling-based approaches, Source is suitable in cases where implicit-differentiation-based approaches struggle, such as in non-converged models and multi-stage training pipelines. Empirically, Source outperforms existing TDA techniques in counterfactual prediction, especially in settings where implicit-differentiation-based approaches fall short.
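To make the contrast in the abstract concrete, below is a minimal numpy sketch of the classic influence-function attribution score that implicit-differentiation-based methods build on. This is only a generic illustration, not the paper's Source estimator; the damping term, the omitted 1/n scaling, and the function name are illustrative assumptions.

```python
import numpy as np

def influence_score(grad_train, grad_query, hessian, damping=1e-3):
    """Generic influence-function-style attribution score (illustrative only).

    Approximates how the query loss would change if the training example
    were removed: score ~ grad_query^T (H + damping * I)^{-1} grad_train
    (the usual 1/n factor is omitted). A positive score suggests removing
    the example would increase the query loss, i.e. it was helpful.
    """
    d = hessian.shape[0]
    # Damping keeps the linear solve well-posed when the Hessian is singular.
    h_inv_grad = np.linalg.solve(hessian + damping * np.eye(d), grad_train)
    return float(grad_query @ h_inv_grad)


# Toy usage with random gradients and a synthetic PSD Hessian.
rng = np.random.default_rng(0)
d = 5
A = rng.standard_normal((d, d))
hessian = A @ A.T                    # symmetric positive semi-definite
grad_train = rng.standard_normal(d)  # gradient of one training example's loss
grad_query = rng.standard_normal(d)  # gradient of the query example's loss
print(influence_score(grad_train, grad_query, hessian))
```

As the abstract notes, this kind of formula assumes a converged, well-specified optimum; Source instead approximates the unrolled training trajectory while retaining an influence-function-like computation.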

Cite

Text

Bae et al. "Training Data Attribution via Approximate Unrolling." Neural Information Processing Systems, 2024. doi:10.52202/079017-2129

Markdown

[Bae et al. "Training Data Attribution via Approximate Unrolling." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/bae2024neurips-training/) doi:10.52202/079017-2129

BibTeX

@inproceedings{bae2024neurips-training,
  title     = {{Training Data Attribution via Approximate Unrolling}},
  author    = {Bae, Juhan and Lin, Wu and Lorraine, Jonathan and Grosse, Roger},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-2129},
  url       = {https://mlanthology.org/neurips/2024/bae2024neurips-training/}
}