Algorithmic Recourse for Long-Term Improvement

Abstract

Algorithmic recourse aims to provide a recourse action that alters an unfavorable prediction made by a model into a favorable one (e.g., loan approval). In practice, it is also desirable to ensure that an action improves the real-world outcome (e.g., loan repayment). We call this requirement improvement. Unfortunately, existing methods cannot ensure improvement unless they have access to the true outcome oracle. To address this issue, we propose a framework for suggesting improvement-oriented actions from a long-term perspective. Specifically, we introduce a new online learning task of assigning actions to a given sequence of instances. We assume that we can observe delayed feedback on whether a previously suggested action achieved improvement. Using this feedback, we estimate an action that can achieve improvement for each instance. To solve this task, we propose two approaches based on contextual linear bandits and contextual Bayesian optimization. Experimental results demonstrate that our approaches can assign improvement-oriented actions to more instances than existing methods.
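To make the bandit-based setting concrete, below is a minimal, illustrative sketch (not the authors' algorithm) of a LinUCB-style contextual linear bandit that picks one candidate recourse action per instance and later updates its estimate when delayed binary improvement feedback arrives. The class name RecourseBandit, the featurize map, and the discrete candidate_actions set are all hypothetical assumptions introduced here for illustration.

# Illustrative sketch only: a LinUCB-style contextual linear bandit for
# assigning recourse actions under delayed improvement feedback.
# RecourseBandit, featurize, and candidate_actions are hypothetical names.
import numpy as np

class RecourseBandit:
    def __init__(self, dim, alpha=1.0):
        self.A = np.eye(dim)      # regularized Gram matrix of observed features
        self.b = np.zeros(dim)    # accumulated feature * feedback vector
        self.alpha = alpha        # exploration weight

    def select(self, instance, candidate_actions, featurize):
        # Choose the action with the highest upper confidence bound on the
        # estimated probability of achieving improvement.
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b
        def ucb(action):
            x = featurize(instance, action)
            return x @ theta + self.alpha * np.sqrt(x @ A_inv @ x)
        return max(candidate_actions, key=ucb)

    def update(self, instance, action, improved, featurize):
        # Called whenever the delayed feedback arrives (improved in {0, 1}),
        # possibly many rounds after the action was suggested.
        x = featurize(instance, action)
        self.A += np.outer(x, x)
        self.b += improved * x

Because the update only needs the stored (instance, action) pair, the feedback may arrive arbitrarily late; this mirrors the delayed-feedback assumption described in the abstract, while the actual objective and algorithms in the paper may differ.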

Cite

Text

Kanamori et al. "Algorithmic Recourse for Long-Term Improvement." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Kanamori et al. "Algorithmic Recourse for Long-Term Improvement." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/kanamori2025icml-algorithmic/)

BibTeX

@inproceedings{kanamori2025icml-algorithmic,
  title     = {{Algorithmic Recourse for Long-Term Improvement}},
  author    = {Kanamori, Kentaro and Kobayashi, Ken and Hara, Satoshi and Takagi, Takuya},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {28849--28877},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/kanamori2025icml-algorithmic/}
}