Robust Negative Sampling for Network Embedding

Abstract

Many recent network embedding algorithms use negative sampling (NS) to approximate a variant of the computationally expensive Skip-Gram neural network architecture (SGA) objective. In this paper, we provide theoretical arguments that reveal how NS can fail to properly estimate the SGA objective, and why it is not a suitable candidate for the network embedding problem as a distinct objective. We show that NS can learn undesirable embeddings as a result of the “Popular Neighbor Problem.” We use this theory to develop a new method, “R-NS,” that alleviates the problems of NS through a more intelligent negative sampling scheme and careful penalization of the embeddings. R-NS is scalable to large-scale networks, and we empirically demonstrate its superiority over NS for multi-label classification on a variety of real-world networks, including social networks and language networks.
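
For orientation, the NS objective referred to above is, in its standard word2vec form, the following (a sketch under the usual conventions; the paper analyzes a variant, and the notation here is assumed rather than taken from the paper). For a center node $v$ with an observed neighbor $c$, embedding vectors $\mathbf{u}_v$ and $\mathbf{u}'_c$, and $K$ negative nodes drawn from a noise distribution $P_n$, NS maximizes

\[
\log \sigma\!\left(\mathbf{u}_c'^{\top}\mathbf{u}_v\right)
  + \sum_{i=1}^{K} \mathbb{E}_{n_i \sim P_n}\!\left[\log \sigma\!\left(-\mathbf{u}_{n_i}'^{\top}\mathbf{u}_v\right)\right],
\qquad \sigma(x) = \frac{1}{1 + e^{-x}},
\]

in place of the full softmax over all nodes, which costs $O(|V|)$ per update. Following word2vec, $P_n$ is commonly taken proportional to node degree raised to the $3/4$ power.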

Cite

Text

Armandpour et al. "Robust Negative Sampling for Network Embedding." AAAI Conference on Artificial Intelligence, 2019. doi:10.1609/AAAI.V33I01.33013191

Markdown

[Armandpour et al. "Robust Negative Sampling for Network Embedding." AAAI Conference on Artificial Intelligence, 2019.](https://mlanthology.org/aaai/2019/armandpour2019aaai-robust/) doi:10.1609/AAAI.V33I01.33013191

BibTeX

@inproceedings{armandpour2019aaai-robust,
  title     = {{Robust Negative Sampling for Network Embedding}},
  author    = {Armandpour, Mohammadreza and Ding, Patrick and Huang, Jianhua and Hu, Xia},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2019},
  pages     = {3191--3198},
  doi       = {10.1609/AAAI.V33I01.33013191},
  url       = {https://mlanthology.org/aaai/2019/armandpour2019aaai-robust/}
}