Letting Uncertainty Guide Your Multimodal Machine Translation

Abstract

Multimodal Machine Translation (MMT) leverages additional modalities, such as visual data, to enhance translation accuracy and resolve linguistic ambiguities inherent in text-only approaches. Recent advances predominantly focus on integrating image information via attention mechanisms or feature-fusion techniques. However, current approaches lack explicit mechanisms to quantify and manage uncertainty during the translation process, leaving the use of image information a black box. This makes it difficult to address the incomplete exploitation of visual information, and even the potential degradation of translation quality, when visual input is used. To address these challenges, we introduce a novel Uncertainty-Guided Multimodal Machine Translation (UG-MMT) framework that redefines how translation systems handle ambiguity through systematic uncertainty reduction. Designed with plug-and-play flexibility, our framework integrates seamlessly into existing MMT systems, requiring minimal modification while delivering significant performance gains.
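The abstract does not spell out how uncertainty is quantified or how it steers the visual modality, so the following is only a minimal illustrative sketch of the general idea of uncertainty-gated visual fusion, not the paper's actual method. It assumes a PyTorch encoder-decoder MMT backbone; the module name UncertaintyGatedFusion, its interface, and the entropy-based gate are all hypothetical.

# Hypothetical sketch: gate pooled image features into decoder states by
# token-level predictive entropy from a text-only pass. Illustrative only;
# this is not the UG-MMT implementation described in the paper.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class UncertaintyGatedFusion(nn.Module):
    """Inject more visual context where the text-only prediction is uncertain."""

    def __init__(self, hidden_dim: int, visual_dim: int):
        super().__init__()
        self.visual_proj = nn.Linear(visual_dim, hidden_dim)  # map image features into the decoder space

    def forward(self, decoder_states, text_logits, visual_feats):
        # decoder_states: (batch, tgt_len, hidden_dim) states from a text-only decoding pass
        # text_logits:    (batch, tgt_len, vocab_size) text-only next-token logits
        # visual_feats:   (batch, visual_dim) pooled image features
        probs = F.softmax(text_logits, dim=-1)
        entropy = -(probs * probs.clamp_min(1e-9).log()).sum(dim=-1)   # (batch, tgt_len)
        gate = entropy / math.log(text_logits.size(-1))                # normalize to [0, 1]
        visual = self.visual_proj(visual_feats).unsqueeze(1)           # (batch, 1, hidden_dim)
        # Uncertain tokens (gate near 1) receive more visual context;
        # confident tokens are left mostly text-only.
        return decoder_states + gate.unsqueeze(-1) * visual

# Toy usage with random tensors standing in for a real MMT model.
fusion = UncertaintyGatedFusion(hidden_dim=512, visual_dim=2048)
states = torch.randn(2, 7, 512)
logits = torch.randn(2, 7, 32000)
image = torch.randn(2, 2048)
fused = fusion(states, logits, image)   # (2, 7, 512)

Because the gate only rescales an additive visual term, a module of this shape could in principle be dropped into an existing MMT decoder with minimal changes, which is the plug-and-play property the abstract emphasizes.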

Cite

Text

Liu et al. "Letting Uncertainty Guide Your Multimodal Machine Translation." Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, 2025.

Markdown

[Liu et al. "Letting Uncertainty Guide Your Multimodal Machine Translation." Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, 2025.](https://mlanthology.org/uai/2025/liu2025uai-letting/)

BibTeX

@inproceedings{liu2025uai-letting,
  title     = {{Letting Uncertainty Guide Your Multimodal Machine Translation}},
  author    = {Liu, Wuyi and Gao, Yue and Mao, Yige and Zhao, Jing},
  booktitle = {Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence},
  year      = {2025},
  pages     = {2701--2710},
  volume    = {286},
  url       = {https://mlanthology.org/uai/2025/liu2025uai-letting/}
}