On Learning Universal Representations Across Languages
Abstract
Recent studies have demonstrated the overwhelming advantage of cross-lingual pre-trained models (PTMs), such as multilingual BERT and XLM, on cross-lingual NLP tasks. However, existing approaches essentially capture only the co-occurrence among tokens by relying on the masked language model (MLM) objective with token-level cross entropy. In this work, we extend these approaches to learn sentence-level representations and demonstrate their effectiveness on cross-lingual understanding and generation. Specifically, we propose a Hierarchical Contrastive Learning (HiCTL) method to (1) learn universal representations for parallel sentences distributed in one or multiple languages and (2) distinguish the semantically related words from a shared cross-lingual vocabulary for each sentence. We conduct evaluations on two challenging cross-lingual tasks, XTREME and machine translation. Experimental results show that HiCTL outperforms the state-of-the-art XLM-R by an absolute gain of 4.2% accuracy on the XTREME benchmark and achieves substantial improvements over strong baselines on both high-resource and low-resource English$\rightarrow$X translation tasks.
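To make the sentence-level objective in the abstract concrete, the following is a minimal sketch of an InfoNCE-style contrastive loss over paired sentence embeddings of parallel sentences, where embeddings of a translation pair are pulled together and other sentences in the batch serve as negatives. The function name, temperature value, and the assumption that sentence embeddings are already pooled from an encoder are illustrative choices, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def sentence_contrastive_loss(src_emb: torch.Tensor,
                              tgt_emb: torch.Tensor,
                              temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style loss for a batch of parallel sentence pairs.

    src_emb, tgt_emb: (batch, dim) pooled sentence representations,
    where row i of src_emb and row i of tgt_emb are translations.
    """
    # Cosine similarity via L2-normalized embeddings.
    src = F.normalize(src_emb, dim=-1)
    tgt = F.normalize(tgt_emb, dim=-1)

    # (batch, batch) similarity matrix; diagonal entries are positives.
    logits = src @ tgt.t() / temperature
    labels = torch.arange(logits.size(0), device=logits.device)

    # Symmetric cross-entropy: source-to-target and target-to-source.
    loss = 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))
    return loss


# Usage with random embeddings standing in for encoder outputs.
if __name__ == "__main__":
    src = torch.randn(8, 768)
    tgt = torch.randn(8, 768)
    print(sentence_contrastive_loss(src, tgt).item())
```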
Cite
Text
Wei et al. "On Learning Universal Representations Across Languages." International Conference on Learning Representations, 2021.
Markdown
[Wei et al. "On Learning Universal Representations Across Languages." International Conference on Learning Representations, 2021.](https://mlanthology.org/iclr/2021/wei2021iclr-learning/)
BibTeX
@inproceedings{wei2021iclr-learning,
title = {{On Learning Universal Representations Across Languages}},
author = {Wei, Xiangpeng and Weng, Rongxiang and Hu, Yue and Xing, Luxi and Yu, Heng and Luo, Weihua},
booktitle = {International Conference on Learning Representations},
year = {2021},
url = {https://mlanthology.org/iclr/2021/wei2021iclr-learning/}
}