Kernelized Bayesian SoftMax for Text Generation
Abstract
Neural models for text generation require a softmax layer with proper token embeddings during the decoding phase. Most existing approaches adopt a single point embedding for each token. However, a word may have multiple senses depending on its context, and these senses can be quite distinct. In this paper, we propose KerBS, a novel approach for learning better embeddings for text generation. KerBS has two advantages: (a) it employs a Bayesian composition of embeddings for words with multiple senses; (b) it adapts to the semantic variance of words and is robust to rare sentence contexts by imposing learned kernels that capture the closeness of words (senses) in the embedding space. Empirical studies show that KerBS significantly boosts the performance of several text generation tasks.
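To make the idea concrete, here is a minimal, hypothetical PyTorch sketch of a kernelized multi-sense output layer in the spirit of the abstract: each word owns several sense embeddings, a learnable per-sense sharpness parameter plays the role of the learned kernel, and a word's probability aggregates the kernel scores of all its senses. The class name `MultiSenseKernelSoftmax`, the exponential-cosine kernel, and all parameter names are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class MultiSenseKernelSoftmax(nn.Module):
    """Hypothetical sketch of a kernelized multi-sense softmax layer.

    Each vocabulary word owns `n_senses` embeddings. A word's score sums
    exponential-cosine kernel responses between the decoder hidden state
    and each of its sense embeddings; `theta` is a learned per-sense
    sharpness (a stand-in for KerBS's learned kernel, not the paper's
    exact form).
    """

    def __init__(self, vocab_size: int, n_senses: int, hidden_dim: int):
        super().__init__()
        # (vocab_size, n_senses, hidden_dim) sense embeddings
        self.senses = nn.Parameter(
            torch.randn(vocab_size, n_senses, hidden_dim) * 0.02)
        # Learned kernel sharpness, one scalar per sense
        self.theta = nn.Parameter(torch.zeros(vocab_size, n_senses))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        """h: (batch, hidden_dim) decoder states -> (batch, vocab) log-probs."""
        h = nn.functional.normalize(h, dim=-1)
        e = nn.functional.normalize(self.senses, dim=-1)
        # Cosine similarity between each state and every sense embedding
        cos = torch.einsum('bd,vsd->bvs', h, e)            # (batch, V, S)
        # Log of the exponential-cosine kernel, exp(theta) keeps sharpness > 0
        log_kernel = torch.exp(self.theta).unsqueeze(0) * cos
        # Mixture over senses: p(w|h) proportional to sum_s k(h, e_{w,s})
        word_scores = torch.logsumexp(log_kernel, dim=-1)  # (batch, V)
        return torch.log_softmax(word_scores, dim=-1)
```

A typical call maps a batch of decoder states to vocabulary log-probabilities, e.g. `MultiSenseKernelSoftmax(10000, 3, 512)(torch.randn(8, 512))` yields a `(8, 10000)` tensor.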
Cite
Text
Miao et al. "Kernelized Bayesian SoftMax for Text Generation." Neural Information Processing Systems, 2019.

Markdown
[Miao et al. "Kernelized Bayesian SoftMax for Text Generation." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/miao2019neurips-kernelized/)

BibTeX
@inproceedings{miao2019neurips-kernelized,
title = {{Kernelized Bayesian SoftMax for Text Generation}},
author = {Miao, Ning and Zhou, Hao and Zhao, Chengqi and Shi, Wenxian and Li, Lei},
booktitle = {Neural Information Processing Systems},
year = {2019},
pages = {12508--12518},
url = {https://mlanthology.org/neurips/2019/miao2019neurips-kernelized/}
}