Pseudo-Label Training and Model Inertia in Neural Machine Translation

Abstract

Like many other machine learning applications, neural machine translation (NMT) benefits from over-parameterized deep neural models. However, these models have been observed to be brittle: NMT model predictions are sensitive to small input changes and can show significant variation across re-training or incremental model updates. This work studies a frequently used method in NMT, pseudo-label training (PLT), which is common to the related techniques of forward-translation (or self-training) and sequence-level knowledge distillation. While the effect of PLT on quality is well-documented, we highlight a lesser-known effect: PLT can enhance a model's stability to model updates and input perturbations, a set of properties we call *model inertia*. We study inertia effects under different training settings and identify distribution simplification as a mechanism behind the observed results.
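
The pseudo-label training loop the abstract refers to can be summarized as: a trained teacher model forward-translates monolingual source text, and the resulting (source, pseudo-target) pairs are used to train a student. The Python sketch below is a minimal illustration of that loop under stated assumptions, not the authors' implementation; the `TranslationModel` interface, the `pseudo_label_training` function, and the batching details are all hypothetical.

from typing import Iterable, List, Protocol, Tuple


class TranslationModel(Protocol):
    """Hypothetical interface for an NMT model (assumption, not the paper's API)."""

    def translate(self, sources: List[str]) -> List[str]: ...

    def train_step(self, pairs: List[Tuple[str, str]]) -> float: ...


def pseudo_label_training(
    teacher: TranslationModel,
    student: TranslationModel,
    monolingual_sources: Iterable[str],
    batch_size: int = 32,
) -> None:
    """Forward-translate monolingual data with the teacher, then train the
    student on the resulting (source, pseudo-target) pairs.

    When teacher and student share the same architecture and initialization,
    this corresponds to self-training; when the student is a separate (often
    smaller) model, it corresponds to sequence-level knowledge distillation.
    """
    batch: List[str] = []
    for source in monolingual_sources:
        batch.append(source)
        if len(batch) == batch_size:
            # Teacher predictions (e.g. beam-search output) become the labels;
            # training on them simplifies the target distribution the student sees.
            pseudo_targets = teacher.translate(batch)
            loss = student.train_step(list(zip(batch, pseudo_targets)))
            print(f"pseudo-label batch loss: {loss:.4f}")
            batch = []

Because the student is fit to the teacher's outputs rather than to the full diversity of human references, its predictions tend to change less under re-training or incremental updates, which is the stability effect the paper groups under model inertia.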

Cite

Text

Hsu et al. "Pseudo-Label Training and Model Inertia in Neural Machine Translation." International Conference on Learning Representations, 2023.

Markdown

[Hsu et al. "Pseudo-Label Training and Model Inertia in Neural Machine Translation." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/hsu2023iclr-pseudolabel/)

BibTeX

@inproceedings{hsu2023iclr-pseudolabel,
  title     = {{Pseudo-Label Training and Model Inertia in Neural Machine Translation}},
  author    = {Hsu, Benjamin and Currey, Anna and Niu, Xing and Nadejde, Maria and Dinu, Georgiana},
  booktitle = {International Conference on Learning Representations},
  year      = {2023},
  url       = {https://mlanthology.org/iclr/2023/hsu2023iclr-pseudolabel/}
}