Undesirable Biases in NLP: Addressing Challenges of Measurement

Abstract

As Large Language Models and Natural Language Processing (NLP) technology rapidly develop and spread into daily life, it becomes crucial to anticipate how their use could harm people. One problem that has received much attention in recent years is that this technology has displayed harmful biases, from generating derogatory stereotypes to producing disparate outcomes for different social groups. Although considerable effort has been invested in assessing and mitigating these biases, our methods of measuring the biases of NLP models have serious problems, and it is often unclear what they actually measure. In this paper, we provide an interdisciplinary approach to discussing the issue of NLP model bias by adopting the lens of psychometrics, a field specialized in the measurement of concepts like bias that are not directly observable. In particular, we explore two central notions from psychometrics, the construct validity and the reliability of measurement tools, and discuss how they can be applied in the context of measuring model bias. Our goal is to provide NLP practitioners with methodological tools for designing better bias measures, and, more generally, to inspire them to explore tools from psychometrics when working on bias measurement tools. This article appears in the AI & Society track.

Cite

Text

van der Wal et al. "Undesirable Biases in NLP: Addressing Challenges of Measurement." Journal of Artificial Intelligence Research, 2024. doi:10.1613/JAIR.1.15195

Markdown

[van der Wal et al. "Undesirable Biases in NLP: Addressing Challenges of Measurement." Journal of Artificial Intelligence Research, 2024.](https://mlanthology.org/jair/2024/vanderwal2024jair-undesirable/) doi:10.1613/JAIR.1.15195

BibTeX

@article{vanderwal2024jair-undesirable,
  title     = {{Undesirable Biases in NLP: Addressing Challenges of Measurement}},
  author    = {van der Wal, Oskar and Bachmann, Dominik and Leidinger, Alina and van Maanen, Leendert and Zuidema, Willem H. and Schulz, Katrin},
  journal   = {Journal of Artificial Intelligence Research},
  year      = {2024},
  pages     = {1--40},
  doi       = {10.1613/JAIR.1.15195},
  volume    = {79},
  url       = {https://mlanthology.org/jair/2024/vanderwal2024jair-undesirable/}
}