Mufu: Multilingual Fused Learning for Low-Resource Translation with LLM

Abstract

Multilingual large language models (LLMs) are strong translators, but this strength is largely limited to high-resource languages. For many LLMs, translating into and out of low-resource languages remains a challenging task. To maximize data efficiency in this low-resource setting, we introduce Mufu, which includes a selection of automatically generated multilingual candidates and an instruction to correct inaccurate translations in the prompt. Mufu prompts turn a translation task into a post-editing one, and seek to harness the LLM's reasoning capability with auxiliary translation candidates, from which the model is required to assess input quality, align semantics cross-lingually, copy from relevant inputs, and override incorrect instances. Our experiments on En-XX translations over the Flores-200 dataset show that LLMs finetuned against Mufu-style prompts are robust to poor-quality auxiliary translation candidates, achieving performance superior to the NLLB 1.3B distilled model in 64% of low- and very-low-resource language pairs. We then distill these models to reduce inference cost, while maintaining an average 3.1 chrF improvement over the finetune-only baseline in low-resource translations.
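For illustration, the sketch below shows how a Mufu-style post-editing prompt might be assembled: the English source is paired with automatically generated candidates in auxiliary languages, plus an instruction to assess and correct them. The exact template wording, candidate languages, and the helper name are assumptions for this sketch, not the authors' released format.

```python
# Illustrative sketch only: the precise Mufu prompt template is not specified
# in the abstract, so the wording, field layout, and example data below are
# assumptions.

def build_mufu_prompt(source_sentence: str,
                      candidates: dict[str, str],
                      target_lang: str) -> str:
    """Assemble a post-editing prompt from an English source sentence and
    automatically generated translation candidates in auxiliary languages."""
    lines = [
        f"Translate the following English sentence into {target_lang}.",
        "Machine-generated candidate translations are provided below;",
        "they may be inaccurate. Assess their quality, copy what is useful,",
        "and correct anything that is wrong.",
        "",
        f"English source: {source_sentence}",
        "",
        "Candidate translations:",
    ]
    for lang, text in candidates.items():
        lines.append(f"- {lang}: {text}")
    lines.append("")
    lines.append(f"Final {target_lang} translation:")
    return "\n".join(lines)


if __name__ == "__main__":
    # Hypothetical auxiliary candidates, e.g. from an off-the-shelf MT system.
    prompt = build_mufu_prompt(
        source_sentence="The weather is pleasant today.",
        candidates={
            "Indonesian": "Cuaca hari ini menyenangkan.",
            "Malay (draft)": "Cuaca hari ini baik.",
        },
        target_lang="Javanese",
    )
    print(prompt)
```

In this setup the model is finetuned to produce the final translation conditioned on the candidates, so it learns to copy from reliable inputs and override unreliable ones rather than translating from scratch.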

Cite

Text

Lim et al. "Mufu: Multilingual Fused Learning for Low-Resource Translation with LLM." International Conference on Learning Representations, 2025.

Markdown

[Lim et al. "Mufu: Multilingual Fused Learning for Low-Resource Translation with LLM." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/lim2025iclr-mufu/)

BibTeX

@inproceedings{lim2025iclr-mufu,
  title     = {{Mufu: Multilingual Fused Learning for Low-Resource Translation with LLM}},
  author    = {Lim, Zheng Wei and Gupta, Nitish and Yu, Honglin and Cohn, Trevor},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/lim2025iclr-mufu/}
}