On the Privacy Risks of Algorithmic Recourse
Abstract
As predictive models are increasingly being employed to make consequential decisions, there is a growing emphasis on developing techniques that can provide algorithmic recourse to affected individuals. While such recourses can be immensely beneficial to affected individuals, potential adversaries could also exploit these recourses to compromise privacy. In this work, we make the first attempt at investigating if and how an adversary can leverage recourses to infer private information about the underlying model's training data. To this end, we propose a series of novel membership inference attacks which leverage algorithmic recourse. More specifically, we extend the prior literature on membership inference attacks to the recourse setting by leveraging the distances between data instances and their corresponding counterfactuals output by state-of-the-art recourse methods. Extensive experimentation with real-world and synthetic datasets demonstrates significant privacy leakage through recourses. Our work establishes unintended privacy leakage as an important risk in the widespread adoption of recourse methods.
Cite

Text:

Pawelczyk et al. "On the Privacy Risks of Algorithmic Recourse." Artificial Intelligence and Statistics, 2023.

Markdown:

[Pawelczyk et al. "On the Privacy Risks of Algorithmic Recourse." Artificial Intelligence and Statistics, 2023.](https://mlanthology.org/aistats/2023/pawelczyk2023aistats-privacy/)

BibTeX:
@inproceedings{pawelczyk2023aistats-privacy,
title = {{On the Privacy Risks of Algorithmic Recourse}},
author = {Pawelczyk, Martin and Lakkaraju, Himabindu and Neel, Seth},
booktitle = {Artificial Intelligence and Statistics},
year = {2023},
pages = {9680--9696},
volume = {206},
url = {https://mlanthology.org/aistats/2023/pawelczyk2023aistats-privacy/}
}