Interpreting Gender Bias in Neural Machine Translation: Multilingual Architecture Matters

Abstract

Multilingual neural machine translation architectures differ mainly in the number of modules and parameters shared across languages. In this paper, we explore, from an algorithmic perspective, whether the chosen architecture, when trained on the same data, influences the level of gender bias. Experiments conducted on three language pairs show that language-specific encoder-decoders exhibit less bias than the shared architecture. We propose two methods for interpreting and studying gender bias in machine translation, based on source embeddings and attention. Our analysis shows that, in the language-specific case, the embeddings encode more gender information and the attention is more diverted. Both behaviors help mitigate gender bias.

Cite

Text

Costa-jussà et al. "Interpreting Gender Bias in Neural Machine Translation: Multilingual Architecture Matters." AAAI Conference on Artificial Intelligence, 2022. doi:10.1609/AAAI.V36I11.21442

Markdown

[Costa-jussà et al. "Interpreting Gender Bias in Neural Machine Translation: Multilingual Architecture Matters." AAAI Conference on Artificial Intelligence, 2022.](https://mlanthology.org/aaai/2022/costajussa2022aaai-interpreting/) doi:10.1609/AAAI.V36I11.21442

BibTeX

@inproceedings{costajussa2022aaai-interpreting,
  title     = {{Interpreting Gender Bias in Neural Machine Translation: Multilingual Architecture Matters}},
  author    = {Costa-jussà, Marta R. and Escolano, Carlos and Basta, Christine and Ferrando, Javier and Batlle, Roser and Kharitonova, Ksenia},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2022},
  pages     = {11855--11863},
  doi       = {10.1609/AAAI.V36I11.21442},
  url       = {https://mlanthology.org/aaai/2022/costajussa2022aaai-interpreting/}
}