Multilingual Alignment of Contextual Word Representations

Abstract

We propose procedures for evaluating and strengthening contextual embedding alignment and show that they are useful in analyzing and improving multilingual BERT. In particular, after our proposed alignment procedure, BERT exhibits significantly improved zero-shot performance on XNLI compared to the base model, remarkably matching pseudo-fully-supervised translate-train models for Bulgarian and Greek. Further, to measure the degree of alignment, we introduce a contextual version of word retrieval and show that it correlates well with downstream zero-shot transfer. Using this word retrieval task, we also analyze BERT and find that it exhibits systematic deficiencies, e.g. worse alignment for open-class parts-of-speech and word pairs written in different scripts, that are corrected by the alignment procedure. These results support contextual alignment as a useful concept for understanding large multilingual pre-trained models.
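The contextual word retrieval evaluation described above can be made concrete with a short sketch. The following is a minimal illustration, not the authors' released code: it embeds words in parallel sentences with multilingual BERT (via the Hugging Face transformers library), averages subword states into one vector per word, and scores how often a source word's nearest target word under cosine similarity is the gold-aligned one. The toy parallel data, the choice of hidden layer, and plain cosine nearest-neighbor retrieval are illustrative assumptions, not details taken from the paper.

import torch
from transformers import AutoModel, AutoTokenizer

MODEL = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)
model.eval()


def word_vectors(words, layer=8):
    """Return one contextual vector per word by averaging its subword states."""
    enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc, output_hidden_states=True).hidden_states[layer][0]
    word_ids = enc.word_ids(batch_index=0)  # maps each subword to its word index
    vecs = []
    for i in range(len(words)):
        idx = [t for t, w in enumerate(word_ids) if w == i]
        vecs.append(hidden[idx].mean(dim=0))
    return torch.stack(vecs)


def retrieval_accuracy(src_sents, tgt_sents, gold_alignments, layer=8):
    """Fraction of aligned source words whose nearest target word
    (cosine similarity over all target words in the corpus) is the gold one."""
    src_vecs, tgt_vecs, tgt_index = [], [], []  # tgt_index: (sentence, word) per row
    for s, t in zip(src_sents, tgt_sents):
        src_vecs.append(word_vectors(s, layer))
        v = word_vectors(t, layer)
        tgt_vecs.append(v)
        tgt_index += [(len(tgt_vecs) - 1, j) for j in range(len(t))]
    tgt_mat = torch.nn.functional.normalize(torch.cat(tgt_vecs), dim=-1)

    correct = total = 0
    for sent_id, align in enumerate(gold_alignments):
        src_mat = torch.nn.functional.normalize(src_vecs[sent_id], dim=-1)
        nearest = (src_mat @ tgt_mat.T).argmax(dim=-1)  # cosine nearest neighbor
        for i, j in align:  # source word i aligns to target word j
            correct += int(tgt_index[nearest[i].item()] == (sent_id, j))
            total += 1
    return correct / total


if __name__ == "__main__":
    # Toy English-German parallel sentences with gold word alignments (illustrative only).
    src = [["the", "cat", "sleeps"], ["dogs", "bark", "loudly"]]
    tgt = [["die", "Katze", "schläft"], ["Hunde", "bellen", "laut"]]
    gold = [[(0, 0), (1, 1), (2, 2)], [(0, 0), (1, 1), (2, 2)]]
    print(f"retrieval accuracy: {retrieval_accuracy(src, tgt, gold):.2f}")

In this setup, higher retrieval accuracy indicates better-aligned contextual representations across the two languages, which is the quantity the paper relates to downstream zero-shot transfer.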

Cite

Text

Cao et al. "Multilingual Alignment of Contextual Word Representations." International Conference on Learning Representations, 2020.

Markdown

[Cao et al. "Multilingual Alignment of Contextual Word Representations." International Conference on Learning Representations, 2020.](https://mlanthology.org/iclr/2020/cao2020iclr-multilingual/)

BibTeX

@inproceedings{cao2020iclr-multilingual,
  title     = {{Multilingual Alignment of Contextual Word Representations}},
  author    = {Cao, Steven and Kitaev, Nikita and Klein, Dan},
  booktitle = {International Conference on Learning Representations},
  year      = {2020},
  url       = {https://mlanthology.org/iclr/2020/cao2020iclr-multilingual/}
}