Cascaded Text Generation with Markov Transformers
Abstract
The two dominant approaches to neural text generation are fully autoregressive models, using serial beam search decoding, and non-autoregressive models, using parallel decoding with no output dependencies. This work proposes an autoregressive model with sub-linear parallel time generation. Noting that conditional random fields with bounded context can be decoded in parallel, we propose an efficient cascaded decoding approach for generating high-quality output. To parameterize this cascade, we introduce a Markov transformer, a variant of the popular fully autoregressive model that allows us to simultaneously decode with specific autoregressive context cutoffs. This approach requires only a small modification from standard autoregressive training, while showing a competitive accuracy/speed tradeoff compared to existing methods on five machine translation datasets.
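The parallel-decoding claim rests on a standard property of bounded-context conditional random fields: max-plus products of adjacent-position score matrices are associative, so the best chain score can be combined in a balanced tree rather than strictly left to right, giving logarithmic parallel depth. The sketch below is not the authors' code; it assumes NumPy, a toy label vocabulary, folds any unary scores into the edge potentials, and returns only the best score rather than the argmax sequence, purely to illustrate the logarithmic-depth reduction for a first-order chain.

import numpy as np

def maxplus(A, B):
    # Max-plus matrix "product": (A (x) B)[i, j] = max_k A[i, k] + B[k, j].
    return (A[:, :, None] + B[None, :, :]).max(axis=1)

def chain_max_score(edge_scores):
    # edge_scores: list of (V, V) log-potential matrices, one per adjacent
    # position pair, scoring (label_t, label_{t+1}); unary scores are assumed
    # to have been folded into these edges.
    mats = list(edge_scores)
    while len(mats) > 1:                      # about log2(n) rounds; the
        pairs = zip(mats[0::2], mats[1::2])   # products within a round are
        merged = [maxplus(A, B) for A, B in pairs]  # independent (parallel)
        if len(mats) % 2 == 1:
            merged.append(mats[-1])
        mats = merged
    return mats[0].max()                      # best total score over all label paths

# Toy usage: 8 positions, vocabulary of 5 labels (values are random placeholders).
rng = np.random.default_rng(0)
scores = [rng.normal(size=(5, 5)) for _ in range(7)]
print(chain_max_score(scores))

Recovering the argmax labeling requires keeping backpointers at each merge, but the reduction structure is the same.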
Cite
Text
Deng and Rush. "Cascaded Text Generation with Markov Transformers." Neural Information Processing Systems, 2020.
Markdown
[Deng and Rush. "Cascaded Text Generation with Markov Transformers." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/deng2020neurips-cascaded/)
BibTeX
@inproceedings{deng2020neurips-cascaded,
  title = {{Cascaded Text Generation with Markov Transformers}},
  author = {Deng, Yuntian and Rush, Alexander},
  booktitle = {Neural Information Processing Systems},
  year = {2020},
  url = {https://mlanthology.org/neurips/2020/deng2020neurips-cascaded/}
}