Graph Transformation Augmentation for Contrastive Learning of Graph-Level Representation: An Initial Exploration

Abstract

Contrastive learning on image data has become a representative self-supervised method for pre-training a neural encoder, from either a data or a model perspective. However, data-perspective methods are less explored in the graph domain because graph data augmentation is not as mature as image data augmentation. In this paper, we propose a transformation-based graph data augmentation named Graph Transformation Augmentation (GTA). GTA preserves the information of the graph spectrum rather than subgraph information, and comes in two types: Permutation Augmentation and Orthonormal Augmentation. Finally, we experimentally validate the effectiveness of GTA for self-supervised representation learning and find that, counterintuitively, GTA preserves graph semantics.
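The spectrum-preserving property behind both augmentation types can be checked directly: relabeling nodes with a permutation matrix P (giving P A Pᵀ) or conjugating by any orthonormal matrix Q (giving Q A Qᵀ) is a similarity transformation, so the eigenvalues of the adjacency matrix are unchanged. A toy NumPy sketch (an illustration of the idea, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random symmetric 0/1 adjacency matrix of a 6-node graph.
n = 6
A = rng.integers(0, 2, size=(n, n))
A = np.triu(A, 1)
A = A + A.T

# Permutation Augmentation: relabel nodes with a permutation matrix P.
perm = rng.permutation(n)
P = np.eye(n)[perm]
A_perm = P @ A @ P.T

# Orthonormal Augmentation: conjugate by an orthonormal matrix Q
# (obtained here via QR decomposition of a random Gaussian matrix).
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
A_orth = Q @ A @ Q.T

# Both are similarity transformations, so the spectrum is preserved.
ev = np.sort(np.linalg.eigvalsh(A))
print(np.allclose(ev, np.sort(np.linalg.eigvalsh(A_perm))))  # True
print(np.allclose(ev, np.sort(np.linalg.eigvalsh(A_orth))))  # True
```

Note that while the permuted matrix is still the adjacency matrix of an isomorphic graph, the orthonormally transformed matrix is generally dense and real-valued; only its spectral information, not its subgraph structure, matches the original.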

Cite

Text

Li and Pei. "Graph Transformation Augmentation for Contrastive Learning of Graph-Level Representation: An Initial Exploration." NeurIPS 2024 Workshops: Compression, 2024.

Markdown

[Li and Pei. "Graph Transformation Augmentation for Contrastive Learning of Graph-Level Representation: An Initial Exploration." NeurIPS 2024 Workshops: Compression, 2024.](https://mlanthology.org/neuripsw/2024/li2024neuripsw-graph/)

BibTeX

@inproceedings{li2024neuripsw-graph,
  title     = {{Graph Transformation Augmentation for Contrastive Learning of Graph-Level Representation: An Initial Exploration}},
  author    = {Li, Tianchao and Pei, Yulong},
  booktitle = {NeurIPS 2024 Workshops: Compression},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/li2024neuripsw-graph/}
}