Influence-Based Attributions Can Be Manipulated
Abstract
Influence functions are a standard tool for attributing predictions to training data in a principled manner and are widely used in applications such as data valuation and fairness. In this work, we present realistic incentives to manipulate influence-based attributions and investigate whether these attributions can be systematically tampered with by an adversary. We show that this is indeed possible for logistic regression models trained on ResNet feature embeddings and standard tabular fairness datasets, and we provide efficient attacks with backward-friendly implementations. Our work raises questions about the reliability of influence-based attributions in adversarial circumstances. Code will be made available at https://github.com/infinite-pursuits/influence-based-attributions-can-be-manipulated.
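For readers unfamiliar with the attribution method the paper studies, the following is a minimal sketch of the classic influence-function computation for an L2-regularized logistic regression model: the influence of up-weighting a training point z on the loss at a test point is approximated as -∇L(z_test)ᵀ H⁻¹ ∇L(z). This is a generic illustration of the technique, not the paper's code; all function names and hyperparameters here are illustrative choices.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lam=0.1, iters=500, lr=0.5):
    # Plain gradient descent on the mean L2-regularized logistic loss.
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        g = X.T @ (sigmoid(X @ w) - y) / n + lam * w
        w -= lr * g
    return w

def influence(X, y, w, x_test, y_test, lam=0.1):
    # Influence of up-weighting each training point on the test loss:
    #   I(z_i, z_test) = -grad_test^T H^{-1} grad_i
    n, d = X.shape
    p = sigmoid(X @ w)
    # Hessian of the mean regularized loss at w.
    H = (X * (p * (1 - p))[:, None]).T @ X / n + lam * np.eye(d)
    g_test = (sigmoid(x_test @ w) - y_test) * x_test
    G_train = (p - y)[:, None] * X  # per-example gradients
    return -G_train @ np.linalg.solve(H, g_test)

# Toy data: label depends on the first feature.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(float)
w = train_logreg(X, y)
infl = influence(X, y, w, X[0], y[0])  # influence of every training point on test point X[0]
```

Note that self-influence (taking the test point to be a training point) is non-positive under this sign convention, since H is positive definite: up-weighting a point helps the model fit that same point.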
Cite
Text
Yadav et al. "Influence-Based Attributions Can Be Manipulated." NeurIPS 2024 Workshops: RegML, 2024.
Markdown
[Yadav et al. "Influence-Based Attributions Can Be Manipulated." NeurIPS 2024 Workshops: RegML, 2024.](https://mlanthology.org/neuripsw/2024/yadav2024neuripsw-influencebased/)
BibTeX
@inproceedings{yadav2024neuripsw-influencebased,
title = {{Influence-Based Attributions Can Be Manipulated}},
author = {Yadav, Chhavi and Wu, Ruihan and Chaudhuri, Kamalika},
booktitle = {NeurIPS 2024 Workshops: RegML},
year = {2024},
url = {https://mlanthology.org/neuripsw/2024/yadav2024neuripsw-influencebased/}
}