A Multiplicative Model for Learning Distributed Text-Based Attribute Representations
Abstract
In this paper we propose a general framework for learning distributed representations of attributes: characteristics of text whose representations can be jointly learned with word embeddings. Attributes can correspond to a wide variety of concepts, such as document indicators (to learn sentence vectors), language indicators (to learn distributed language representations), meta-data and side information (such as the age, gender and industry of a blogger) or representations of authors. We describe a third-order model where word context and attribute vectors interact multiplicatively to predict the next word in a sequence. This leads to the notion of conditional word similarity: how meanings of words change when conditioned on different attributes. We perform several experimental tasks including sentiment classification, cross-lingual document classification, and blog authorship attribution. We also qualitatively evaluate conditional word neighbours and attribute-conditioned text generation.
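The multiplicative interaction the abstract describes can be sketched as a factored third-order model: context and attribute vectors are each projected into a shared factor space, combined element-wise, and mapped to vocabulary logits. The following is a minimal illustrative sketch, not the paper's implementation; all dimensions, weight names, and the softmax readout are assumptions for illustration.

```python
import numpy as np

# Hypothetical dimensions (illustrative only, not from the paper)
d_word, d_attr, n_factors, vocab = 50, 20, 100, 1000
rng = np.random.default_rng(0)

# Factored third-order tensor: one factor matrix per mode
Wc = rng.standard_normal((n_factors, d_word))  # projects the context vector
Wa = rng.standard_normal((n_factors, d_attr))  # projects the attribute vector
Wo = rng.standard_normal((vocab, n_factors))   # maps factors to vocabulary logits

def next_word_logits(context_vec, attr_vec):
    """Multiplicative (gated) interaction: the attribute modulates the
    context representation component-wise in factor space, which is
    equivalent to a rank-constrained three-way tensor product."""
    f = (Wc @ context_vec) * (Wa @ attr_vec)   # element-wise product
    return Wo @ f

context = rng.standard_normal(d_word)
attr = rng.standard_normal(d_attr)
logits = next_word_logits(context, attr)
probs = np.exp(logits - logits.max())
probs /= probs.sum()                            # softmax over the vocabulary
```

Because the attribute vector gates the context in factor space, changing `attr` reweights which context directions drive the prediction; this is what gives rise to the conditional word similarity the abstract mentions.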
Cite
Text
Kiros et al. "A Multiplicative Model for Learning Distributed Text-Based Attribute Representations." Neural Information Processing Systems, 2014.
Markdown
[Kiros et al. "A Multiplicative Model for Learning Distributed Text-Based Attribute Representations." Neural Information Processing Systems, 2014.](https://mlanthology.org/neurips/2014/kiros2014neurips-multiplicative/)
BibTeX
@inproceedings{kiros2014neurips-multiplicative,
title = {{A Multiplicative Model for Learning Distributed Text-Based Attribute Representations}},
author = {Kiros, Ryan and Zemel, Richard and Salakhutdinov, Ruslan},
booktitle = {Neural Information Processing Systems},
year = {2014},
pages = {2348--2356},
url = {https://mlanthology.org/neurips/2014/kiros2014neurips-multiplicative/}
}