Inducing Human-like Biases in Moral Reasoning Language Models

Abstract

In this work, we study the alignment (BrainScore) of large language models (LLMs) fine-tuned for moral reasoning on behavioral and/or brain data from humans performing the same task. We also explore whether fine-tuning several LLMs on the fMRI data of humans performing moral reasoning can improve the BrainScore. We fine-tune several LLMs (BERT, RoBERTa, DeBERTa) on moral-reasoning behavioral data from the ETHICS benchmark Hendrycks et al. [2020], on the moral-reasoning fMRI data from Koster-Hale et al. [2013], or on both. We evaluate both accuracy on the ETHICS benchmark and the BrainScores between model activations and fMRI data. While larger models generally performed better on both metrics, BrainScores did not significantly improve after fine-tuning.
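A BrainScore of the kind described above is typically computed by fitting a linear encoding model from model activations to fMRI voxel responses and correlating predictions with held-out data. The following is a minimal illustrative sketch of that idea using synthetic data; all names, dimensions, and the closed-form ridge regression are assumptions for illustration, not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stimuli, n_features, n_voxels = 60, 32, 10

# Hypothetical stand-ins: LLM activations per moral-reasoning stimulus,
# and fMRI responses simulated as a noisy linear map of those activations.
activations = rng.normal(size=(n_stimuli, n_features))
true_map = rng.normal(size=(n_features, n_voxels))
fmri = activations @ true_map + 0.5 * rng.normal(size=(n_stimuli, n_voxels))

train, test = slice(0, 40), slice(40, 60)

def ridge_fit(X, Y, alpha=1.0):
    # Closed-form ridge regression: W = (X'X + alpha*I)^-1 X'Y
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)

# Fit the linear encoding model on training stimuli, predict held-out fMRI.
W = ridge_fit(activations[train], fmri[train])
pred = activations[test] @ W

# Score = mean Pearson r between predicted and observed voxel responses.
rs = [np.corrcoef(pred[:, v], fmri[test, v])[0, 1] for v in range(n_voxels)]
score = float(np.mean(rs))
print(f"BrainScore (synthetic): {score:.3f}")
```

On this low-noise synthetic data the score is high by construction; on real fMRI data, cross-validation and noise-ceiling normalization would be applied per voxel or region.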

Cite

Text

Meek et al. "Inducing Human-like Biases in Moral Reasoning Language Models." NeurIPS 2024 Workshops: UniReps, 2024.

Markdown

[Meek et al. "Inducing Human-like Biases in Moral Reasoning Language Models." NeurIPS 2024 Workshops: UniReps, 2024.](https://mlanthology.org/neuripsw/2024/meek2024neuripsw-inducing/)

BibTeX

@inproceedings{meek2024neuripsw-inducing,
  title     = {{Inducing Human-like Biases in Moral Reasoning Language Models}},
  author    = {Meek, Austin and Karpov, Artem and Cho, Seong Hah and Koopmanschap, Raymond and Farnik, Lucy and Cirstea, Bogdan-Ionut},
  booktitle = {NeurIPS 2024 Workshops: UniReps},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/meek2024neuripsw-inducing/}
}