FairDistillation: Mitigating Stereotyping in Language Models
Abstract
Large pre-trained language models are successfully used in a variety of tasks, across many languages. With this ever-increasing usage, the risk of harmful side effects also rises, for example by reproducing and reinforcing stereotypes. However, detecting and mitigating these harms is difficult to do in general and becomes computationally expensive when tackling multiple languages or when considering different biases. To address this, we present FairDistillation: a cross-lingual method based on knowledge distillation to construct smaller language models while controlling for specific biases. We found that our distillation method does not negatively affect the downstream performance on most tasks and successfully mitigates stereotyping and representational harms. We demonstrate that FairDistillation can create fairer language models at a considerably lower cost than alternative approaches.
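As background for the abstract (this is a hedged illustration, not the paper's actual implementation): knowledge distillation trains a small student model to match a teacher's softened output distribution, and a fairness-aware variant could add a term penalizing divergence between the student's predictions on counterfactual inputs (e.g. a sentence with a gendered token swapped). A minimal sketch with made-up logits over a toy vocabulary:

```python
import math

def softmax(logits, temperature=1.0):
    # Convert raw logits to a probability distribution.
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    # KL(p || q), the core of the standard distillation objective.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # Student is trained to match the teacher's softened distribution.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return kl_divergence(p, q)

def fairness_penalty(student_logits_a, student_logits_b):
    # Hypothetical add-on term: symmetric KL between the student's
    # predictions on a counterfactual pair, e.g. "he is a [MASK]"
    # vs. "she is a [MASK]". Zero when the predictions agree.
    p = softmax(student_logits_a)
    q = softmax(student_logits_b)
    return 0.5 * (kl_divergence(p, q) + kl_divergence(q, p))

# Toy example: three-token vocabulary, made-up logits.
teacher = [2.0, 1.0, 0.1]
student = [1.8, 1.1, 0.2]
total_loss = (distillation_loss(teacher, student)
              + fairness_penalty([1.8, 1.1, 0.2], [1.1, 1.8, 0.2]))
```

In practice the teacher and student are full language models and both terms are averaged over a training corpus; the function names and weighting here are assumptions for illustration only.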
Cite
Text

Delobelle and Berendt. "FairDistillation: Mitigating Stereotyping in Language Models." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2022. doi:10.1007/978-3-031-26390-3_37

Markdown

[Delobelle and Berendt. "FairDistillation: Mitigating Stereotyping in Language Models." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2022.](https://mlanthology.org/ecmlpkdd/2022/delobelle2022ecmlpkdd-fairdistillation/) doi:10.1007/978-3-031-26390-3_37

BibTeX
@inproceedings{delobelle2022ecmlpkdd-fairdistillation,
title = {{FairDistillation: Mitigating Stereotyping in Language Models}},
author = {Delobelle, Pieter and Berendt, Bettina},
booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
year = {2022},
pages = {638--654},
doi = {10.1007/978-3-031-26390-3_37},
url = {https://mlanthology.org/ecmlpkdd/2022/delobelle2022ecmlpkdd-fairdistillation/}
}