Optimizing Group-Fair Plackett-Luce Ranking Models for Relevance and Ex-Post Fairness

Abstract

In learning-to-rank (LTR), optimizing only for relevance (or expected ranking utility) can cause representational harm to certain categories of items. We propose a novel objective that maximizes expected relevance only over rankings that satisfy given representation constraints, thereby ensuring ex-post fairness. Building on recent work on an efficient sampler for ex-post group-fair rankings, we propose a group-fair Plackett-Luce model and show that it can be optimized efficiently for our objective in the LTR framework. Experiments on three real-world datasets show that our algorithm guarantees fairness while usually achieving better relevance than the LTR baselines. Our algorithm also achieves better relevance than post-processing baselines that likewise ensure ex-post fairness. Furthermore, when implicit bias is injected into the training data, our algorithm typically outperforms existing LTR baselines in relevance.
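For context, the standard (unconstrained) Plackett-Luce model draws a ranking by sampling items sequentially without replacement, each time with probability proportional to its score; equivalently, one can add i.i.d. Gumbel noise to the log-scores and sort. The sketch below illustrates that standard model only, not the paper's group-fair sampler or its representation constraints; the function name and use of NumPy are our own choices for illustration.

```python
import numpy as np

def sample_plackett_luce(scores, rng=None):
    """Sample one ranking from a (standard) Plackett-Luce model.

    Uses the Gumbel-Max trick: adding i.i.d. Gumbel(0, 1) noise to the
    log-scores and sorting in descending order is distributionally
    equivalent to sequential sampling without replacement with
    probabilities proportional to the scores.
    """
    rng = np.random.default_rng() if rng is None else rng
    scores = np.asarray(scores, dtype=float)
    noisy = np.log(scores) + rng.gumbel(size=scores.shape)
    return np.argsort(-noisy)  # item indices, best rank first

# Example: items with scores 3, 1, 2; higher-score items tend to rank higher.
ranking = sample_plackett_luce([3.0, 1.0, 2.0], rng=np.random.default_rng(0))
```

The paper's group-fair variant restricts this sampling to rankings that meet group representation constraints, which the plain sampler above does not enforce.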

Cite

Text

Gorantla et al. "Optimizing Group-Fair Plackett-Luce Ranking Models for Relevance and Ex-Post Fairness." NeurIPS 2023 Workshops: OPT, 2023.

Markdown

[Gorantla et al. "Optimizing Group-Fair Plackett-Luce Ranking Models for Relevance and Ex-Post Fairness." NeurIPS 2023 Workshops: OPT, 2023.](https://mlanthology.org/neuripsw/2023/gorantla2023neuripsw-optimizing/)

BibTeX

@inproceedings{gorantla2023neuripsw-optimizing,
  title     = {{Optimizing Group-Fair Plackett-Luce Ranking Models for Relevance and Ex-Post Fairness}},
  author    = {Gorantla, Sruthi and Bhansali, Eshaan and Deshpande, Amit and Louis, Anand},
  booktitle = {NeurIPS 2023 Workshops: OPT},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/gorantla2023neuripsw-optimizing/}
}