A Tensorized Transformer for Language Modeling

Abstract

Recent developments in neural models have connected the encoder and decoder through an attention mechanism. In particular, the Transformer, which relies solely on self-attention, has led to breakthroughs in Natural Language Processing (NLP) tasks. However, the multi-head attention mechanism, a key component of the Transformer, limits the effective deployment of the model in resource-limited settings. In this paper, based on the ideas of tensor decomposition and parameter sharing, we propose a novel self-attention model (namely Multi-linear attention) with Block-Term Tensor Decomposition (BTD). We test and verify the proposed attention method on three language modeling tasks (i.e., PTB, WikiText-103 and One-Billion Word) and a neural machine translation task (i.e., WMT-2016 English-German). Multi-linear attention not only largely compresses the model parameters but also obtains performance improvements, compared with a number of language modeling approaches, such as Transformer, Transformer-XL, and Transformer with tensor-train decomposition.
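
The abstract's central idea is to replace the separately parameterized attention heads with a small core tensor combined with query/key/value factor matrices via a Block-Term (Tucker-style) decomposition. The snippet below is a minimal illustrative sketch of a single-block version, not the authors' implementation: the function name, tensor shapes, scaling, and the final read-out contraction are assumptions made purely for illustration.

```python
"""Minimal sketch of single-block Tucker-style ("block-term") attention.

Assumptions (not from the paper): Q, K, V are [n, r] factor matrices, the
core tensor is [r, r, r], and the output is recovered by contracting the
3rd-order attention tensor back against K, V, and the core.
"""
import torch


def single_block_tucker_attention(Q, K, V, core):
    n, r = Q.shape
    # Build the 3rd-order attention tensor:
    # A[i, j, l] = sum_{a,b,c} core[a, b, c] * Q[i, a] * K[j, b] * V[l, c]
    A = torch.einsum("abc,ia,jb,lc->ijl", core, Q, K, V) / (r ** 0.5)
    # Normalize over the key mode (index j), analogous to softmax attention.
    A = torch.softmax(A, dim=1)
    # Illustrative read-out: contract the key and value modes away to get
    # an [n, r] output per query position.
    return torch.einsum("ijl,jb,lc,abc->ia", A, K, V, core)


if __name__ == "__main__":
    n, r = 8, 16
    Q, K, V = (torch.randn(n, r) for _ in range(3))
    core = torch.randn(r, r, r)
    print(single_block_tucker_attention(Q, K, V, core).shape)  # torch.Size([8, 16])
```

In the multi-block (multi-head) setting described in the abstract, the factor matrices would be shared across blocks and only the small core tensors would differ, which is where the parameter compression comes from.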

Cite

Text

Ma et al. "A Tensorized Transformer for Language Modeling." Neural Information Processing Systems, 2019.

Markdown

[Ma et al. "A Tensorized Transformer for Language Modeling." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/ma2019neurips-tensorized/)

BibTeX

@inproceedings{ma2019neurips-tensorized,
  title     = {{A Tensorized Transformer for Language Modeling}},
  author    = {Ma, Xindian and Zhang, Peng and Zhang, Shuai and Duan, Nan and Hou, Yuexian and Zhou, Ming and Song, Dawei},
  booktitle = {Neural Information Processing Systems},
  year      = {2019},
  pages     = {2232--2242},
  url       = {https://mlanthology.org/neurips/2019/ma2019neurips-tensorized/}
}