Disentangled Multiplex Graph Representation Learning

Abstract

Unsupervised multiplex graph representation learning (UMGRL) has received increasing interest, but few works have simultaneously focused on extracting both common and private information. In this paper, we argue that, for effective and robust UMGRL, it is essential to extract complete and clean common information, as well as private information that is more complementary and less noisy. To achieve this, we first investigate disentangled representation learning for the multiplex graph to capture complete and clean common information, and then design a contrastive constraint to preserve the complementarity and remove the noise in the private information. Moreover, we theoretically show that the common and private representations learned by our method are provably disentangled and contain more task-relevant and less task-irrelevant information, benefiting downstream tasks. Extensive experiments verify the superiority of the proposed method on different downstream tasks.

Cite

Text

Mo et al. "Disentangled Multiplex Graph Representation Learning." International Conference on Machine Learning, 2023.

Markdown

[Mo et al. "Disentangled Multiplex Graph Representation Learning." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/mo2023icml-disentangled/)

BibTeX

@inproceedings{mo2023icml-disentangled,
  title     = {{Disentangled Multiplex Graph Representation Learning}},
  author    = {Mo, Yujie and Lei, Yajie and Shen, Jialie and Shi, Xiaoshuang and Shen, Heng Tao and Zhu, Xiaofeng},
  booktitle = {International Conference on Machine Learning},
  year      = {2023},
  pages     = {24983--25005},
  volume    = {202},
  url       = {https://mlanthology.org/icml/2023/mo2023icml-disentangled/}
}