ME: Modelling Ethical Values for Value Alignment

Abstract

Value alignment, at the intersection of moral philosophy and AI safety, is dedicated to ensuring that artificially intelligent (AI) systems align with a given set of values. One challenge facing value alignment researchers is accurately translating these values into a machine-readable format. In the case of reinforcement learning (RL), a popular method within value alignment, this requires designing a reward function that accurately defines the value of every state-action pair. It is common for programmers to hand-set and manually tune these values. In this paper, we examine the challenges of hand-programming values into reward functions for value alignment and propose mathematical models as an alternative grounding for reward function design in ethical scenarios. Experimental results demonstrate that our modelled-ethics approach offers a more consistent alternative and outperforms our hand-programmed reward functions.
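
To make the contrast concrete, here is a minimal toy sketch in Python (hypothetical, not the paper's actual reward functions or ethical model): a hand-programmed reward table assigns a manually tuned number to each state-action pair, whereas a modelled reward derives its value from an explicit mathematical model, here an assumed weighted sum over ethically relevant features.

```python
State = str
Action = str

# Hand-programmed approach: every state-action value is set and tuned by hand.
# The numbers below are illustrative "magic constants" chosen by a programmer.
HAND_TUNED_REWARD: dict[tuple[State, Action], float] = {
    ("person_in_danger", "help"): 1.0,
    ("person_in_danger", "ignore"): -1.0,
    ("person_safe", "help"): 0.1,
    ("person_safe", "ignore"): 0.0,
}

def hand_tuned_reward(state: State, action: Action) -> float:
    """Look up a manually assigned value; unlisted pairs default to 0."""
    return HAND_TUNED_REWARD.get((state, action), 0.0)

# Modelled-ethics approach (sketch): the reward is computed from a value
# model rather than set directly. Here the model is a toy linear combination
# of outcome features; the feature names and weights are assumptions.
def modelled_reward(features: dict[str, float], weights: dict[str, float]) -> float:
    """Reward = sum of ethically relevant feature values scaled by model weights."""
    return sum(weights[name] * value for name, value in features.items())

if __name__ == "__main__":
    weights = {"harm_averted": 2.0, "effort": -0.1}  # assumed model parameters
    print(hand_tuned_reward("person_in_danger", "help"))                    # 1.0
    print(modelled_reward({"harm_averted": 1.0, "effort": 0.5}, weights))   # 1.95
```

The design difference is that the hand-tuned table must be revised entry by entry whenever values change, while the model-grounded reward changes consistently across all state-action pairs when its parameters are updated.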

Cite

Text

Rigley et al. "ME: Modelling Ethical Values for Value Alignment." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I26.34974

Markdown

[Rigley et al. "ME: Modelling Ethical Values for Value Alignment." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/rigley2025aaai-me/) doi:10.1609/AAAI.V39I26.34974

BibTeX

@inproceedings{rigley2025aaai-me,
  title     = {{ME: Modelling Ethical Values for Value Alignment}},
  author    = {Rigley, Eryn and Chapman, Adriane and Evers, Christine and McNeill, William},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {27608--27616},
  doi       = {10.1609/AAAI.V39I26.34974},
  url       = {https://mlanthology.org/aaai/2025/rigley2025aaai-me/}
}