Overcoming Non-Monotonicity in Transducer-Based Streaming Generation

Abstract

Streaming generation models are widely used across fields, with the Transducer architecture being particularly popular in industrial applications. However, its input-synchronous decoding mechanism presents challenges in tasks requiring non-monotonic alignments, such as simultaneous translation. In this research, we address this issue by integrating the Transducer's decoding with the history of the input stream via a learnable monotonic attention. Our approach leverages the forward-backward algorithm to infer the posterior probability of alignments between the predictor states and input timestamps, which is then used to estimate the monotonic context representations, thereby avoiding the need to enumerate the exponentially large alignment space during training. Extensive experiments show that our MonoAttn-Transducer effectively handles non-monotonic alignments in streaming scenarios, offering a robust solution for complex generation tasks. Code is available at https://github.com/ictnlp/MonoAttn-Transducer.
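The forward-backward computation over alignments that the abstract describes can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: it models a T × U Transducer lattice where a blank advances the input frame and a label advances the predictor state, computes the posterior probability that an alignment path visits each lattice node, and then normalizes that posterior per predictor state to form attention weights over encoder states. The function name `alignment_posterior` and the per-state normalization are illustrative choices, not taken from the paper.

```python
import numpy as np

NEG_INF = -1e30  # stand-in for log(0)

def alignment_posterior(log_blank, log_emit):
    """Forward-backward over a T x U Transducer-style lattice.

    log_blank[t, u]: log-prob of a blank at node (t, u), moving to (t+1, u)
    log_emit[t, u]:  log-prob of a label at node (t, u), moving to (t, u+1)
    Returns (gamma, log_z): gamma[t, u] is the posterior probability that an
    alignment path from (0, 0) to (T-1, U-1) visits node (t, u); log_z is the
    total log-probability summed over all paths.
    """
    T, U = log_blank.shape

    # Forward pass: alpha[t, u] = log-sum over path prefixes ending at (t, u).
    alpha = np.full((T, U), NEG_INF)
    alpha[0, 0] = 0.0
    for t in range(T):
        for u in range(U):
            if t > 0:
                alpha[t, u] = np.logaddexp(
                    alpha[t, u], alpha[t - 1, u] + log_blank[t - 1, u])
            if u > 0:
                alpha[t, u] = np.logaddexp(
                    alpha[t, u], alpha[t, u - 1] + log_emit[t, u - 1])

    # Backward pass: beta[t, u] = log-sum over path suffixes from (t, u).
    beta = np.full((T, U), NEG_INF)
    beta[T - 1, U - 1] = 0.0
    for t in range(T - 1, -1, -1):
        for u in range(U - 1, -1, -1):
            if t < T - 1:
                beta[t, u] = np.logaddexp(
                    beta[t, u], log_blank[t, u] + beta[t + 1, u])
            if u < U - 1:
                beta[t, u] = np.logaddexp(
                    beta[t, u], log_emit[t, u] + beta[t, u + 1])

    log_z = alpha[T - 1, U - 1]           # total log-probability of all paths
    gamma = np.exp(alpha + beta - log_z)  # node-visit posterior
    return gamma, log_z

# Usage: turn the posterior into monotonic attention over encoder states.
rng = np.random.default_rng(0)
T, U, d = 6, 4, 3
log_blank = np.log(rng.uniform(0.1, 1.0, (T, U)))
log_emit = np.log(rng.uniform(0.1, 1.0, (T, U)))
h = rng.normal(size=(T, d))                      # encoder states, one per frame

gamma, _ = alignment_posterior(log_blank, log_emit)
attn = gamma / gamma.sum(axis=0, keepdims=True)  # weights over frames per state
context = attn.T @ h                             # (U, d) expected contexts
```

Because the posterior is computed in O(T·U) by dynamic programming, the exponentially many alignment paths never need to be enumerated, which is the property the abstract highlights.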

Cite

Text

Ma et al. "Overcoming Non-Monotonicity in Transducer-Based Streaming Generation." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Ma et al. "Overcoming Non-Monotonicity in Transducer-Based Streaming Generation." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/ma2025icml-overcoming/)

BibTeX

@inproceedings{ma2025icml-overcoming,
  title     = {{Overcoming Non-Monotonicity in Transducer-Based Streaming Generation}},
  author    = {Ma, Zhengrui and Feng, Yang and Zhang, Min},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {41890--41906},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/ma2025icml-overcoming/}
}