A Survey on Low-Resource Neural Machine Translation

Abstract

Neural approaches have achieved state-of-the-art accuracy in machine translation but suffer from the high cost of collecting large-scale parallel data. Consequently, much research has been conducted on neural machine translation (NMT) with very limited parallel data, i.e., the low-resource setting. In this paper, we provide a survey of low-resource NMT and classify related works into three categories according to the auxiliary data they use: (1) exploiting monolingual data of the source and/or target languages, (2) exploiting data from auxiliary languages, and (3) exploiting multi-modal data. We hope that our survey can help researchers better understand this field and inspire them to design better algorithms, and help industry practitioners choose appropriate algorithms for their applications.

Cite

Text

Wang et al. "A Survey on Low-Resource Neural Machine Translation." International Joint Conference on Artificial Intelligence, 2021. doi:10.24963/IJCAI.2021/629

Markdown

[Wang et al. "A Survey on Low-Resource Neural Machine Translation." International Joint Conference on Artificial Intelligence, 2021.](https://mlanthology.org/ijcai/2021/wang2021ijcai-survey/) doi:10.24963/IJCAI.2021/629

BibTeX

@inproceedings{wang2021ijcai-survey,
  title     = {{A Survey on Low-Resource Neural Machine Translation}},
  author    = {Wang, Rui and Tan, Xu and Luo, Renqian and Qin, Tao and Liu, Tie-Yan},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2021},
  pages     = {4636--4643},
  doi       = {10.24963/IJCAI.2021/629},
  url       = {https://mlanthology.org/ijcai/2021/wang2021ijcai-survey/}
}