Attribute and Structure Preserving Graph Contrastive Learning
Abstract
Graph Contrastive Learning (GCL) has drawn much research interest due to its strong ability to capture both graph structure and node attribute information in a self-supervised manner. Current GCL methods usually adopt Graph Neural Networks (GNNs) as the base encoder, which typically rely on the homophily assumption of networks and overlook node similarity in the attribute space. There are many scenarios where such an assumption cannot be satisfied, or where node similarity plays a crucial role. To design a more robust mechanism, we develop a novel attribute and structure preserving graph contrastive learning framework, named ASP, which comprehensively and efficiently preserves node attributes while exploiting graph structure. Specifically, we consider three different graph views in our framework, i.e., the original view, the attribute view, and the global structure view. We then perform contrastive learning across the three views in a joint fashion, mining comprehensive graph information. We validate the effectiveness of the proposed framework on various real-world networks with different levels of homophily. The results demonstrate the superior performance of our model over representative baselines.
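To make the three-view idea concrete, below is a minimal sketch of how such views and a cross-view contrastive objective could look. The specific choices here (a kNN graph over cosine similarity for the attribute view, personalized-PageRank diffusion for the global structure view, and an InfoNCE-style loss) are illustrative assumptions, not the authors' exact implementation.

```python
# Illustrative sketch of the three-view setup described in the abstract.
# kNN attribute graph, PPR diffusion, and InfoNCE are assumed design
# choices for exposition, not ASP's actual components.
import numpy as np

def attribute_view(X, k=2):
    """kNN graph built from cosine similarity in the node attribute space."""
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-8)
    S = Xn @ Xn.T
    np.fill_diagonal(S, -np.inf)          # exclude self-loops
    A = np.zeros_like(S)
    idx = np.argsort(-S, axis=1)[:, :k]   # top-k most similar nodes
    for i, nbrs in enumerate(idx):
        A[i, nbrs] = 1.0
    return np.maximum(A, A.T)             # symmetrize

def global_structure_view(A, alpha=0.15):
    """Personalized PageRank diffusion capturing global structure:
    alpha * (I - (1 - alpha) * D^{-1} A)^{-1}."""
    n = A.shape[0]
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1e-8)
    P = A / deg                           # row-normalized transition matrix
    return alpha * np.linalg.inv(np.eye(n) - (1 - alpha) * P)

def infonce(Z1, Z2, tau=0.5):
    """Cross-view InfoNCE: the same node in two views is the positive pair."""
    Z1 = Z1 / (np.linalg.norm(Z1, axis=1, keepdims=True) + 1e-8)
    Z2 = Z2 / (np.linalg.norm(Z2, axis=1, keepdims=True) + 1e-8)
    logits = Z1 @ Z2.T / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_p))       # positives on the diagonal
```

In a full pipeline, each view would be passed through an encoder to obtain node embeddings, and pairwise InfoNCE terms across the three views would be summed into one joint objective.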
Cite
Text
Chen and Kou. "Attribute and Structure Preserving Graph Contrastive Learning." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I6.25858
Markdown
[Chen and Kou. "Attribute and Structure Preserving Graph Contrastive Learning." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/chen2023aaai-attribute/) doi:10.1609/AAAI.V37I6.25858
BibTeX
@inproceedings{chen2023aaai-attribute,
title = {{Attribute and Structure Preserving Graph Contrastive Learning}},
author = {Chen, Jialu and Kou, Gang},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2023},
pages = {7024-7032},
doi = {10.1609/AAAI.V37I6.25858},
url = {https://mlanthology.org/aaai/2023/chen2023aaai-attribute/}
}