Guiding LLM Decision-Making with Fairness Reward Models
Abstract
Large language models are increasingly used to support high-stakes decisions, potentially influencing who is granted bail or receives a loan. Naive chain-of-thought sampling can improve average decision accuracy, but has also been shown to amplify unfair bias. To address this challenge and enable the trustworthy use of reasoning models in high-stakes decision-making, we propose a framework for training a generalizable Fairness Reward Model (FRM). Our model assigns a fairness score to LLM reasoning, enabling the system to down-weight biased trajectories and favor equitable ones when aggregating decisions across reasoning chains. We show that a single Fairness Reward Model, trained on weakly supervised, LLM-annotated examples of biased versus unbiased reasoning, transfers across tasks, domains, and model families without additional fine-tuning. When applied to real-world decision-making tasks including recidivism prediction and social media moderation, our approach consistently improves fairness while matching, or even surpassing, baseline accuracy.
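The abstract describes scoring each sampled reasoning chain with the FRM and then aggregating decisions so that biased trajectories are down-weighted. The exact aggregation rule is not specified here, so the following is only a minimal sketch of one plausible scheme (fairness-score-weighted voting); the function name, score range, and threshold are illustrative assumptions, not the paper's stated method.

```python
import numpy as np

def aggregate_decisions(decisions, fairness_scores, threshold=0.5):
    """Aggregate binary decisions from sampled reasoning chains.

    decisions: list of 0/1 votes, one per chain-of-thought sample
    fairness_scores: list of FRM scores (assumed here to lie in [0, 1])
    Returns a single 0/1 decision where low-fairness chains carry less weight.
    """
    weights = np.asarray(fairness_scores, dtype=float)
    votes = np.asarray(decisions, dtype=float)
    if weights.sum() == 0:
        # Degenerate case: fall back to an unweighted majority vote.
        return int(votes.mean() >= threshold)
    # Down-weight biased trajectories: chains with low fairness scores
    # contribute less to the aggregated decision.
    weighted_vote = (weights * votes).sum() / weights.sum()
    return int(weighted_vote >= threshold)

# Example: five sampled chains vote 1,1,0,1,0; the FRM trusts chains 2, 3, 5 most.
print(aggregate_decisions([1, 1, 0, 1, 0], [0.2, 0.9, 0.8, 0.1, 0.7]))
```

Under this hypothetical rule the final decision follows the chains the FRM rates as most fair rather than the raw majority, which is one way to realize the down-weighting the abstract describes.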
Cite
Text
Hall et al. "Guiding LLM Decision-Making with Fairness Reward Models." Advances in Neural Information Processing Systems, 2025.
Markdown
[Hall et al. "Guiding LLM Decision-Making with Fairness Reward Models." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/hall2025neurips-guiding/)
BibTeX
@inproceedings{hall2025neurips-guiding,
  title     = {{Guiding LLM Decision-Making with Fairness Reward Models}},
  author    = {Hall, Zara and Subbiah, Melanie and Zollo, Thomas P and McKeown, Kathleen and Zemel, Richard},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/hall2025neurips-guiding/}
}