Perturbation Augmentation for Fairer NLP

Abstract

Unwanted and often harmful social biases are becoming ever more salient in NLP research, affecting both models and datasets. In this work, we ask whether training on demographically perturbed data leads to fairer language models. We collect a large dataset of human-annotated text perturbations and train a neural perturbation model, which we show outperforms heuristic alternatives. We find that (i) language models (LMs) pre-trained on demographically perturbed corpora are typically fairer, (ii) LMs finetuned on perturbed GLUE datasets exhibit less demographic bias on downstream tasks, and (iii) fairness improvements do not come at the expense of downstream task performance. Lastly, we discuss outstanding questions about how best to evaluate the (un)fairness of large language models. We hope that this exploration of neural demographic perturbation will help drive further progress towards fairer NLP.
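To make the "heuristic alternatives" mentioned above concrete, the sketch below shows what a simple word-list demographic perturber might look like. This is an illustrative assumption only, not the paper's neural perturber or its released code; the word list, function name, and example sentence are hypothetical.

```python
# Illustrative sketch of a heuristic word-list demographic perturber,
# the kind of baseline a neural perturbation model is compared against.
# The lexicon below is a tiny hypothetical sample, not the paper's resource.
import re

# Hypothetical gendered-term swap list. Note the inherent ambiguity of
# lexicon-based swaps: "his" can map to "her" or "hers" depending on context,
# which is exactly where heuristics fall short of a context-aware model.
GENDER_SWAPS = {
    "he": "she", "she": "he",
    "him": "her", "her": "him",
    "his": "her",
    "man": "woman", "woman": "man",
}

def heuristic_perturb(text: str) -> str:
    """Swap gendered words using a fixed lexicon, preserving capitalization."""
    def swap(match: re.Match) -> str:
        word = match.group(0)
        repl = GENDER_SWAPS.get(word.lower(), word)
        return repl.capitalize() if word[0].isupper() else repl

    pattern = r"\b(" + "|".join(GENDER_SWAPS) + r")\b"
    return re.sub(pattern, swap, text, flags=re.IGNORECASE)

print(heuristic_perturb("He thanked his brother."))
# -> "She thanked her brother."
```

A perturber like this produces the demographically swapped text pairs used for data augmentation; the paper's contribution is replacing such brittle, lexicon-bound rewriting with a learned model that handles context and a wider range of demographic attributes.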

Cite

Text

Qian et al. "Perturbation Augmentation for Fairer NLP." NeurIPS 2022 Workshops: RobustSeq, 2022.

Markdown

[Qian et al. "Perturbation Augmentation for Fairer NLP." NeurIPS 2022 Workshops: RobustSeq, 2022.](https://mlanthology.org/neuripsw/2022/qian2022neuripsw-perturbation/)

BibTeX

@inproceedings{qian2022neuripsw-perturbation,
  title     = {{Perturbation Augmentation for Fairer NLP}},
  author    = {Qian, Rebecca and Ross, Candace and Fernandes, Jude and Smith, Eric Michael and Kiela, Douwe and Williams, Adina},
  booktitle = {NeurIPS 2022 Workshops: RobustSeq},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/qian2022neuripsw-perturbation/}
}