Operationalising Rawlsian Ethics for Fairness in Norm Learning Agents

Abstract

Social norms are standards of behaviour common in a society. However, when agents make decisions without considering how others are impacted, norms can emerge that lead to the subjugation of certain agents. We present RAWL·E, a method to create ethical norm-learning agents. RAWL·E agents operationalise maximin, a fairness principle from Rawlsian ethics, in their decision-making processes to promote ethical norms by balancing societal well-being with individual goals. We evaluate RAWL·E agents in simulated harvesting scenarios. We find that norms emerging in RAWL·E agent societies enhance social welfare, fairness, and robustness, and yield higher minimum experience compared to those that emerge in agent societies that do not implement Rawlsian ethics.
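To make the maximin idea concrete: a Rawlsian decision rule ranks candidate actions by the well-being of the worst-off agent rather than by aggregate payoff. Below is a minimal sketch of such a selection rule, assuming agents can predict per-agent well-being outcomes for each action; the function name, data layout, and tie-breaking by total welfare are illustrative assumptions, not the authors' RAWL·E implementation.

def maximin_action(candidate_actions, predicted_wellbeing):
    """Pick the action whose worst-off agent fares best (Rawlsian maximin).

    candidate_actions: list of hashable action identifiers.
    predicted_wellbeing: dict mapping each action to a list of predicted
        well-being values, one per agent in the society.
    """
    def key(action):
        outcomes = predicted_wellbeing[action]
        # Primary criterion: the minimum (worst-off agent's) well-being.
        # Tie-breaker: total well-being, so individual goals still matter.
        return (min(outcomes), sum(outcomes))
    return max(candidate_actions, key=key)

if __name__ == "__main__":
    # Hypothetical harvesting example: three agents, three candidate actions.
    actions = ["harvest_near", "harvest_far", "share"]
    predicted = {
        "harvest_near": [5.0, 1.0, 1.0],  # high total, but worst-off suffers
        "harvest_far":  [3.0, 2.0, 2.0],
        "share":        [2.5, 2.5, 2.5],  # best minimum across agents
    }
    print(maximin_action(actions, predicted))  # -> "share"

Under this rule, "share" wins even though "harvest_near" has the highest total payoff, which is the sense in which maximin trades some aggregate welfare for fairness toward the worst-off agent.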

Cite

Text

Woodgate et al. "Operationalising Rawlsian Ethics for Fairness in Norm Learning Agents." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I25.34837

Markdown

[Woodgate et al. "Operationalising Rawlsian Ethics for Fairness in Norm Learning Agents." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/woodgate2025aaai-operationalising/) doi:10.1609/AAAI.V39I25.34837

BibTeX

@inproceedings{woodgate2025aaai-operationalising,
  title     = {{Operationalising Rawlsian Ethics for Fairness in Norm Learning Agents}},
  author    = {Woodgate, Jessica and Marshall, Paul and Ajmeri, Nirav},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {26382--26390},
  doi       = {10.1609/AAAI.V39I25.34837},
  url       = {https://mlanthology.org/aaai/2025/woodgate2025aaai-operationalising/}
}