A Comparison of Architectures and Pretraining Methods for Contextualized Multilingual Word Embeddings
Abstract
The lack of annotated data in many languages is a well-known challenge in multilingual natural language processing (NLP). Many recent studies therefore focus on zero-shot transfer learning and joint training across languages to overcome data scarcity for low-resource languages. In this work, we (i) perform a comprehensive comparison of state-of-the-art multilingual word and sentence encoders on the tasks of named entity recognition (NER) and part-of-speech (POS) tagging; and (ii) propose a new method for creating multilingual contextualized word embeddings, compare it to multiple baselines, and show that it performs at or above the state-of-the-art in zero-shot transfer settings. Finally, we show that our method allows for better knowledge sharing across languages in a joint training setting.
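To make the zero-shot transfer setting concrete (this is a generic illustration, not the paper's own method or code): an encoder is fine-tuned on annotations in a high-resource language, typically English, and then applied unchanged to text in another language. The sketch below assumes the Hugging Face transformers library and multilingual BERT; the label count and the Dutch example sentence are hypothetical placeholders.

# Minimal sketch of zero-shot cross-lingual transfer for token tagging.
# Assumptions: Hugging Face transformers, mBERT, 9 BIO-style NER labels.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-multilingual-cased",
    num_labels=9,  # e.g. BIO tags for a CoNLL-style NER scheme (assumed)
)

# ... fine-tune `model` on English NER data here (elided) ...

# Zero-shot step: tag a sentence in another language (Dutch) with the
# English-trained model, without any further training.
sentence = "Niels van der Heijden werkt in Amsterdam ."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, seq_len, num_labels)
predictions = logits.argmax(dim=-1).squeeze().tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for token, label_id in zip(tokens, predictions):
    print(token, label_id)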
Cite
Text
van der Heijden et al. "A Comparison of Architectures and Pretraining Methods for Contextualized Multilingual Word Embeddings." AAAI Conference on Artificial Intelligence, 2020. doi:10.1609/AAAI.V34I05.6443
Markdown
[van der Heijden et al. "A Comparison of Architectures and Pretraining Methods for Contextualized Multilingual Word Embeddings." AAAI Conference on Artificial Intelligence, 2020.](https://mlanthology.org/aaai/2020/vanderheijden2020aaai-comparison/) doi:10.1609/AAAI.V34I05.6443
BibTeX
@inproceedings{vanderheijden2020aaai-comparison,
title = {{A Comparison of Architectures and Pretraining Methods for Contextualized Multilingual Word Embeddings}},
author = {van der Heijden, Niels and Abnar, Samira and Shutova, Ekaterina},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2020},
pages = {9090--9097},
doi = {10.1609/AAAI.V34I05.6443},
url = {https://mlanthology.org/aaai/2020/vanderheijden2020aaai-comparison/}
}