Diverse Embedding Neural Network Language Models
Abstract
We propose the Diverse Embedding Neural Network (DENN), a novel architecture for language models (LMs). A DENN LM projects the input word-history vector onto multiple diverse low-dimensional sub-spaces instead of a single higher-dimensional sub-space as in conventional feed-forward neural network LMs. We encourage these sub-spaces to be diverse during network training through an augmented loss function. Our language modeling experiments on the Penn Treebank data set show the performance benefit of using a DENN LM.
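The idea in the abstract can be sketched as follows. This is an illustrative outline only, not the authors' implementation: the random projection matrices, the squared-cosine diversity penalty, and all dimensions here are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes: a word-history feature vector and K low-dimensional sub-spaces.
hist_dim, sub_dim, num_subspaces = 50, 8, 4

# One projection matrix per sub-space (the multiple "diverse embeddings").
projections = [rng.normal(size=(hist_dim, sub_dim)) for _ in range(num_subspaces)]

def embed(history):
    """Project the word-history vector onto each low-dimensional sub-space."""
    return [history @ W for W in projections]

def diversity_penalty(subspace_outputs):
    """Illustrative augmented-loss term: mean squared cosine similarity
    between sub-space outputs. Lower values mean more diverse sub-spaces;
    minimizing it alongside the LM loss pushes the projections apart."""
    units = [v / np.linalg.norm(v) for v in subspace_outputs]
    sims = [
        (units[i] @ units[j]) ** 2
        for i in range(len(units))
        for j in range(i + 1, len(units))
    ]
    return float(np.mean(sims))

history = rng.normal(size=hist_dim)
outputs = embed(history)
penalty = diversity_penalty(outputs)
# A training loss would then combine the usual LM cross-entropy with this
# penalty, e.g. total_loss = cross_entropy + lam * penalty (lam is assumed).
```

In training, the penalty would be averaged over a batch so the sub-spaces decorrelate on real data rather than on a single example.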
Cite
Text
Audhkhasi et al. "Diverse Embedding Neural Network Language Models." International Conference on Learning Representations, 2015.
Markdown
[Audhkhasi et al. "Diverse Embedding Neural Network Language Models." International Conference on Learning Representations, 2015.](https://mlanthology.org/iclr/2015/audhkhasi2015iclr-diverse/)
BibTeX
@inproceedings{audhkhasi2015iclr-diverse,
title = {{Diverse Embedding Neural Network Language Models}},
author = {Audhkhasi, Kartik and Sethy, Abhinav and Ramabhadran, Bhuvana},
booktitle = {International Conference on Learning Representations},
year = {2015},
url = {https://mlanthology.org/iclr/2015/audhkhasi2015iclr-diverse/}
}