On the Learning of Non-Autoregressive Transformers

Abstract

The non-autoregressive Transformer (NAT) is a family of text generation models that aims to reduce decoding latency by predicting whole sentences in parallel. However, this latency reduction sacrifices the ability to capture left-to-right dependencies, making NAT learning very challenging. In this paper, we present theoretical and empirical analyses to reveal the challenges of NAT learning and propose a unified perspective to understand existing successes. First, we show that simply training NAT by maximizing the likelihood yields an approximation of the marginal token distributions but drops all dependencies between tokens, where the dropped information can be measured by the dataset's conditional total correlation. Second, we formalize many previous objectives in a unified framework and show that their success can be attributed to maximizing the likelihood on a proxy distribution, which reduces the information loss. Empirical studies show that our perspective can explain the phenomena in NAT learning and guide the design of new training methods.
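The first claim admits a small numerical illustration. The sketch below is not from the paper: the two-sentence toy dataset and the helper names (`kl_to_factorized`, `conditional_total_correlation`) are hypothetical. It fits the per-position marginals, which is what a fully factorized NAT-style model recovers under maximum likelihood, and checks that the remaining KL gap to the true conditional distribution equals the conditional total correlation.

```python
import math
from collections import defaultdict

# Toy conditional distribution p(y | x) for one fixed source x.
# Targets are length-2 sequences; only two equally likely references exist.
# (Hypothetical example for illustration, not taken from the paper.)
p = {("A", "B"): 0.5, ("B", "A"): 0.5}

# Per-position marginals p(y_t | x): the solution a fully factorized
# (NAT-style) model converges to when trained by maximum likelihood.
marginals = [defaultdict(float), defaultdict(float)]
for y, prob in p.items():
    for t, tok in enumerate(y):
        marginals[t][tok] += prob

def kl_to_factorized(p, marginals):
    """KL( p(y|x) || prod_t p(y_t|x) ): information lost by dropping dependencies."""
    kl = 0.0
    for y, prob in p.items():
        q = marginals[0][y[0]] * marginals[1][y[1]]
        kl += prob * math.log(prob / q)
    return kl

def conditional_total_correlation(p, marginals):
    """C(y|x) = sum_t H(y_t|x) - H(y|x)."""
    h_joint = -sum(prob * math.log(prob) for prob in p.values())
    h_marg = -sum(prob * math.log(prob) for m in marginals for prob in m.values())
    return h_marg - h_joint

print(kl_to_factorized(p, marginals))               # ~0.693 nats (= log 2)
print(conditional_total_correlation(p, marginals))  # ~0.693 nats, the same gap
```

In this toy case the factorized model assigns probability 0.25 to each of the four token combinations, including the invalid outputs "A A" and "B B", and the resulting KL gap of log 2 nats coincides with the conditional total correlation, matching the abstract's claim that the dropped dependency information is measured by that quantity.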

Cite

Text

Huang et al. "On the Learning of Non-Autoregressive Transformers." International Conference on Machine Learning, 2022.

Markdown

[Huang et al. "On the Learning of Non-Autoregressive Transformers." International Conference on Machine Learning, 2022.](https://mlanthology.org/icml/2022/huang2022icml-learning/)

BibTeX

@inproceedings{huang2022icml-learning,
  title     = {{On the Learning of Non-Autoregressive Transformers}},
  author    = {Huang, Fei and Tao, Tianhua and Zhou, Hao and Li, Lei and Huang, Minlie},
  booktitle = {International Conference on Machine Learning},
  year      = {2022},
  pages     = {9356--9376},
  volume    = {162},
  url       = {https://mlanthology.org/icml/2022/huang2022icml-learning/}
}