Representation Degeneration Problem in Training Natural Language Generation Models
Abstract
We study an interesting problem in training neural network-based models for natural language generation tasks, which we call the \emph{representation degeneration problem}. We observe that when training a model for natural language generation tasks through likelihood maximization with the weight tying trick, especially with big training datasets, most of the learnt word embeddings tend to degenerate and be distributed into a narrow cone, which largely limits the representation power of word embeddings. We analyze the conditions and causes of this problem and propose a novel regularization method to address it. Experiments on language modeling and machine translation show that our method can largely mitigate the representation degeneration problem and achieve better performance than baseline algorithms.
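The abstract only states that a regularization method is proposed without spelling it out; as an illustrative sketch (not necessarily the authors' exact formulation), the degeneration it describes can be diagnosed by the average pairwise cosine similarity of the tied word embeddings, and a penalty on that quantity can be added to the likelihood loss. The names `vocab_size`, `dim`, and `lambda_reg` below are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F


def avg_pairwise_cosine(embeddings: torch.Tensor) -> torch.Tensor:
    """Mean cosine similarity over all distinct embedding pairs.

    Values close to 1.0 indicate the embeddings collapse into a narrow cone;
    values near 0.0 indicate directions spread across the whole space.
    """
    normed = F.normalize(embeddings, dim=-1)       # (V, d), unit-norm rows
    sims = normed @ normed.t()                     # (V, V) cosine matrix
    vocab = embeddings.size(0)
    off_diag = sims.sum() - sims.diagonal().sum()  # drop self-similarities
    return off_diag / (vocab * (vocab - 1))


# Hypothetical usage: penalize pairwise cosine similarity of the tied
# output embedding matrix alongside the usual negative log-likelihood.
vocab_size, dim, lambda_reg = 10_000, 512, 1.0
W = torch.nn.Embedding(vocab_size, dim).weight
nll_loss = torch.tensor(0.0)                       # placeholder for the MLE loss
total_loss = nll_loss + lambda_reg * avg_pairwise_cosine(W)
```

A freshly initialized embedding table gives a similarity near zero; monitoring this statistic during training makes the degeneration the abstract describes directly visible.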
Cite
Text
Gao et al. "Representation Degeneration Problem in Training Natural Language Generation Models." International Conference on Learning Representations, 2019.
Markdown
[Gao et al. "Representation Degeneration Problem in Training Natural Language Generation Models." International Conference on Learning Representations, 2019.](https://mlanthology.org/iclr/2019/gao2019iclr-representation/)
BibTeX
@inproceedings{gao2019iclr-representation,
title = {{Representation Degeneration Problem in Training Natural Language Generation Models}},
author = {Gao, Jun and He, Di and Tan, Xu and Qin, Tao and Wang, Liwei and Liu, Tieyan},
booktitle = {International Conference on Learning Representations},
year = {2019},
url = {https://mlanthology.org/iclr/2019/gao2019iclr-representation/}
}