Socialized Word Embeddings

Abstract

Word embeddings have attracted considerable attention. On social media, each user’s language use can be significantly affected by that of the user’s friends. In this paper, we propose a socialized word embedding algorithm that considers both a user’s personal characteristics of language use and the user’s social relationships on social media. To incorporate personal characteristics, we propose representing each user with a user vector. For each user, word embeddings are then trained on that user’s corpus by combining the global word vectors with the local user vector. To incorporate social relationships, we add a regularization term that imposes similarity between the user vectors of two friends. In this way, the global word vectors and user vectors can be trained jointly. To demonstrate the effectiveness of our approach, we trained our vectors on the latest large-scale Yelp data and designed several experiments to show how user vectors affect the results.
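The abstract's core idea (a global word vector combined with a per-user vector, plus a regularizer pulling friends' user vectors together) can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: it assumes a skip-gram-with-negative-sampling objective, and the toy corpora, window size, and hyperparameters (`lr`, `lam`) are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Toy data: two hypothetical users, their tiny corpora, and one friendship edge.
vocab = ["food", "great", "service", "slow", "pizza", "tasty"]
w2i = {w: i for i, w in enumerate(vocab)}
corpora = {
    "alice": ["pizza", "tasty", "food", "great"],
    "bob": ["service", "slow", "food", "great"],
}
friends = [("alice", "bob")]

# Global input/output word vectors, shared across users, plus one vector per user.
W_in = 0.1 * rng.standard_normal((len(vocab), dim))
W_out = 0.1 * rng.standard_normal((len(vocab), dim))
U = {u: 0.1 * rng.standard_normal(dim) for u in corpora}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, lam = 0.05, 0.1  # learning rate and friend-regularization weight (illustrative)
for _ in range(200):
    for user, words in corpora.items():
        for t, center in enumerate(words):
            # Context window of one word on each side.
            for ctx in words[max(0, t - 1):t] + words[t + 1:t + 2]:
                c, o = w2i[center], w2i[ctx]
                # Combine the global word vector with the local user vector.
                h = W_in[c] + U[user]
                # True context word plus one random negative sample.
                neg = rng.integers(len(vocab))
                for target, label in ((o, 1.0), (neg, 0.0)):
                    g = sigmoid(h @ W_out[target]) - label
                    W_out[target] -= lr * g * h
                    grad_h = g * W_out[target]
                    W_in[c] -= lr * grad_h
                    U[user] -= lr * grad_h
    # Social regularization: pull friends' user vectors toward each other.
    for a, b in friends:
        diff = U[a] - U[b]
        U[a] -= lr * lam * diff
        U[b] += lr * lam * diff
```

Because the word vectors are global while the user vectors are local, the joint training lets shared vocabulary statistics and per-user style be learned together, which is the property the experiments in the paper probe.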

Cite

Text

Zeng et al. "Socialized Word Embeddings." International Joint Conference on Artificial Intelligence, 2017. doi:10.24963/IJCAI.2017/547

Markdown

[Zeng et al. "Socialized Word Embeddings." International Joint Conference on Artificial Intelligence, 2017.](https://mlanthology.org/ijcai/2017/zeng2017ijcai-socialized/) doi:10.24963/IJCAI.2017/547

BibTeX

@inproceedings{zeng2017ijcai-socialized,
  title     = {{Socialized Word Embeddings}},
  author    = {Zeng, Ziqian and Yin, Yichun and Song, Yangqiu and Zhang, Ming},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2017},
  pages     = {3915--3921},
  doi       = {10.24963/IJCAI.2017/547},
  url       = {https://mlanthology.org/ijcai/2017/zeng2017ijcai-socialized/}
}