On the Adversarial Robustness of Causal Algorithmic Recourse

Abstract

Algorithmic recourse seeks to provide actionable recommendations for individuals to overcome unfavorable classification outcomes from automated decision-making systems. Recourse recommendations should ideally be robust to reasonably small uncertainty in the features of the individual seeking recourse. In this work, we formulate the adversarially robust recourse problem and show that recourse methods that offer minimally costly recourse fail to be robust. We then present methods for generating adversarially robust recourse for linear and for differentiable classifiers. Finally, we show that regularizing the decision-making classifier to behave locally linearly and to rely more strongly on actionable features facilitates the existence of adversarially robust recourse.
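The gap between minimally costly and adversarially robust recourse can be illustrated for a linear classifier. Below is a minimal sketch, not the paper's method: it assumes an L2 cost over feature changes, a fixed linear classifier `sign(w·x + b)`, and ignores the causal (interventional) structure the paper considers; the function names are illustrative. The minimal-cost action moves the individual exactly onto the decision boundary, so any perturbation of the features undoes the recourse, whereas the robust variant targets a margin of `eps·‖w‖` so every point within an `eps`-ball of the recommended state remains favorably classified.

```python
import numpy as np

def minimal_recourse(x, w, b):
    """Minimum-L2-cost feature change reaching the boundary w.x + b = 0.

    The result lands exactly on the boundary, so it is maximally fragile:
    any adversarial perturbation can flip the outcome back.
    """
    score = w @ x + b
    return -(score / (w @ w)) * w

def robust_recourse(x, w, b, eps):
    """Minimum-L2-cost change that is robust to perturbations of norm <= eps.

    Targets w.x' + b = eps * ||w||, so for any u with ||u|| <= eps:
    w.(x' + u) + b >= eps*||w|| - ||w||*eps = 0, i.e. still positive.
    """
    score = w @ x + b
    target = eps * np.linalg.norm(w)
    return ((target - score) / (w @ w)) * w
```

The extra cost of robustness here is exactly `eps` (the margin divided by `‖w‖`), consistent with the abstract's observation that minimally costly recourse cannot be robust: robustness necessarily buys a safety margin at additional cost.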

Cite

Text

Dominguez-Olmedo et al. "On the Adversarial Robustness of Causal Algorithmic Recourse." International Conference on Machine Learning, 2022.

Markdown

[Dominguez-Olmedo et al. "On the Adversarial Robustness of Causal Algorithmic Recourse." International Conference on Machine Learning, 2022.](https://mlanthology.org/icml/2022/dominguezolmedo2022icml-adversarial/)

BibTeX

@inproceedings{dominguezolmedo2022icml-adversarial,
  title     = {{On the Adversarial Robustness of Causal Algorithmic Recourse}},
  author    = {Dominguez-Olmedo, Ricardo and Karimi, Amir H and Schölkopf, Bernhard},
  booktitle = {International Conference on Machine Learning},
  year      = {2022},
  pages     = {5324--5342},
  volume    = {162},
  url       = {https://mlanthology.org/icml/2022/dominguezolmedo2022icml-adversarial/}
}