Improving Sequence-to-Sequence Learning via Optimal Transport

Abstract

Sequence-to-sequence models are commonly trained via maximum likelihood estimation (MLE). However, standard MLE training considers a word-level objective, predicting the next word given the previous ground-truth partial sentence. This procedure focuses on modeling local syntactic patterns, and may fail to capture long-range semantic structure. We present a novel solution to alleviate these issues. Our approach imposes global sequence-level guidance via new supervision based on optimal transport, enabling the overall characterization and preservation of semantic features. We further show that this method can be understood as a Wasserstein gradient flow that matches our model to the ground-truth sequence distribution. Extensive experiments are conducted to validate the utility of the proposed approach, showing consistent improvements across a wide variety of NLP tasks, including machine translation, abstractive text summarization, and image captioning.
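The sketch below illustrates the general idea described in the abstract: augmenting the word-level MLE loss with a sequence-level optimal-transport term computed between embedded model predictions and embedded reference tokens. It is a minimal, hypothetical PyTorch example using a plain Sinkhorn iteration with a cosine cost; the function names, hyperparameters, and the specific OT solver are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch (assumed names and hyperparameters, not the paper's code):
# add an entropy-regularized OT loss on top of token-level cross-entropy.
import torch
import torch.nn.functional as F


def cosine_cost(x, y):
    """Pairwise cosine-distance cost matrix.

    x: (n, d) embeddings of predicted tokens, y: (m, d) embeddings of reference tokens.
    """
    x = F.normalize(x, dim=-1)
    y = F.normalize(y, dim=-1)
    return 1.0 - x @ y.t()  # (n, m)


def sinkhorn_ot_loss(cost, eps=0.1, n_iters=50):
    """Approximate OT distance via Sinkhorn iterations with uniform marginals."""
    n, m = cost.shape
    mu = torch.full((n,), 1.0 / n, device=cost.device)
    nu = torch.full((m,), 1.0 / m, device=cost.device)
    K = torch.exp(-cost / eps)  # Gibbs kernel
    u = torch.ones_like(mu)
    for _ in range(n_iters):
        v = nu / (K.t() @ u + 1e-8)
        u = mu / (K @ v + 1e-8)
    transport = u.unsqueeze(1) * K * v.unsqueeze(0)  # approximate transport plan
    return (transport * cost).sum()


def seq2seq_loss(logits, target_ids, embedding, ot_weight=0.1):
    """Combine word-level MLE with a sequence-level OT term.

    logits: (T, V) decoder outputs, target_ids: (T,) reference token ids,
    embedding: (V, d) shared word-embedding matrix.
    """
    mle = F.cross_entropy(logits, target_ids)
    # "Soft" predicted embeddings: expected embedding under the model distribution.
    pred_emb = torch.softmax(logits, dim=-1) @ embedding  # (T, d)
    ref_emb = embedding[target_ids]                        # (T, d)
    ot = sinkhorn_ot_loss(cosine_cost(pred_emb, ref_emb))
    return mle + ot_weight * ot
```

In this reading, the OT term compares the predicted and reference sentences as whole sets of embedded words, which is what supplies the global, sequence-level signal that the word-by-word MLE objective lacks.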

Cite

Text

Chen et al. "Improving Sequence-to-Sequence Learning via Optimal Transport." International Conference on Learning Representations, 2019.

Markdown

[Chen et al. "Improving Sequence-to-Sequence Learning via Optimal Transport." International Conference on Learning Representations, 2019.](https://mlanthology.org/iclr/2019/chen2019iclr-improving/)

BibTeX

@inproceedings{chen2019iclr-improving,
  title     = {{Improving Sequence-to-Sequence Learning via Optimal Transport}},
  author    = {Chen, Liqun and Zhang, Yizhe and Zhang, Ruiyi and Tao, Chenyang and Gan, Zhe and Zhang, Haichao and Li, Bai and Shen, Dinghan and Chen, Changyou and Carin, Lawrence},
  booktitle = {International Conference on Learning Representations},
  year      = {2019},
  url       = {https://mlanthology.org/iclr/2019/chen2019iclr-improving/}
}