FairDICE: Fairness-Driven Offline Multi-Objective Reinforcement Learning

Abstract

Multi-objective reinforcement learning (MORL) aims to optimize policies in the presence of conflicting objectives, where linear scalarization is commonly used to reduce vector-valued returns into scalar signals. While effective for certain preferences, this approach cannot capture fairness-oriented goals such as Nash social welfare or max-min fairness, which require nonlinear and non-additive trade-offs. Although several online algorithms have been proposed for specific fairness objectives, a unified approach for optimizing nonlinear welfare criteria in the offline setting, where learning must proceed from a fixed dataset, remains unexplored. In this work, we present FairDICE, the first offline MORL framework that directly optimizes nonlinear welfare objectives. FairDICE leverages distribution correction estimation to jointly account for welfare maximization and distributional regularization, enabling stable and sample-efficient learning without requiring explicit preference weights or exhaustive weight search. Across multiple offline benchmarks, FairDICE demonstrates strong fairness-aware performance compared to existing baselines.
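For reference, the welfare criteria contrasted in the abstract can be written down directly. The sketch below (illustrative values only, not taken from the paper) shows why linear scalarization is additive across objectives while Nash social welfare and max-min fairness are not.

import numpy as np

# Hypothetical per-objective returns for three objectives (illustrative values).
returns = np.array([4.0, 1.0, 7.0])
weights = np.array([1/3, 1/3, 1/3])

# Linear scalarization: a weighted sum, additive across objectives.
linear = float(weights @ returns)

# Nash social welfare: product of returns (equivalently, sum of logs), non-additive.
nash = float(np.prod(returns))  # or np.sum(np.log(returns))

# Max-min fairness: the worst-off objective alone determines the score.
maxmin = float(np.min(returns))

print(linear, nash, maxmin)  # 4.0, 28.0, 1.0

Under the linear criterion, a large gain on one objective can mask a poor outcome on another; the Nash and max-min criteria penalize such imbalance, which is why they cannot be expressed as a fixed weighted sum.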

Cite

Text

Kim et al. "FairDICE: Fairness-Driven Offline Multi-Objective Reinforcement Learning." Advances in Neural Information Processing Systems, 2025.

Markdown

[Kim et al. "FairDICE: Fairness-Driven Offline Multi-Objective Reinforcement Learning." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/kim2025neurips-fairdice/)

BibTeX

@inproceedings{kim2025neurips-fairdice,
  title     = {{FairDICE: Fairness-Driven Offline Multi-Objective Reinforcement Learning}},
  author    = {Kim, Woosung and Lee, Jinho and Lee, Jongmin and Lee, Byung-Jun},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/kim2025neurips-fairdice/}
}