Graph Condensation for Graph Neural Networks

Abstract

Given the prevalence of large-scale graphs in real-world applications, the storage of graph data and the time required to train neural models on it have raised increasing concerns. To alleviate these concerns, we propose and study the problem of graph condensation for graph neural networks (GNNs). Specifically, we aim to condense the large, original graph into a small, synthetic and highly-informative graph, such that GNNs trained on the small graph and the large graph have comparable performance. We approach the condensation problem by imitating the GNN training trajectory on the original graph through the optimization of a gradient matching loss, and we design a strategy to condense node features and structural information simultaneously. Extensive experiments demonstrate the effectiveness of the proposed framework in condensing different graph datasets into informative smaller graphs. In particular, we are able to approximate the original test accuracy by 95.3% on Reddit, 99.8% on Flickr and 99.0% on Citeseer, while reducing their graph size by more than 99.9%, and the condensed graphs can be used to train various GNN architectures.
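To make the gradient matching idea in the abstract concrete, below is a minimal PyTorch sketch of one condensation step: parameter gradients are computed on the original graph and on a small learnable synthetic graph, and the distance between them is minimized with respect to the synthetic node features. All names, shapes, the one-layer GCN, and the fixed synthetic adjacency are illustrative assumptions for exposition, not the authors' released GCond implementation (which, among other things, parameterizes the synthetic adjacency as a function of the synthetic features and alternates with model updates).

```python
# Minimal gradient-matching sketch for graph condensation (illustrative only).
import torch
import torch.nn.functional as F


class SimpleGCN(torch.nn.Module):
    """A simplified one-layer graph convolution plus classifier (assumed model)."""
    def __init__(self, n_feat, n_hidden, n_class):
        super().__init__()
        self.w1 = torch.nn.Linear(n_feat, n_hidden)
        self.w2 = torch.nn.Linear(n_hidden, n_class)

    def forward(self, adj, x):
        h = F.relu(adj @ self.w1(x))  # propagate, then transform
        return self.w2(adj @ h)


def gradient_matching_loss(model, real_batch, syn_batch):
    """Distance between parameter gradients on the real and synthetic graphs."""
    adj_r, x_r, y_r = real_batch
    adj_s, x_s, y_s = syn_batch

    loss_real = F.cross_entropy(model(adj_r, x_r), y_r)
    grads_real = torch.autograd.grad(loss_real, model.parameters())

    loss_syn = F.cross_entropy(model(adj_s, x_s), y_s)
    # create_graph=True so the matching loss can backpropagate into x_syn.
    grads_syn = torch.autograd.grad(loss_syn, model.parameters(), create_graph=True)

    dist = 0.0
    for gr, gs in zip(grads_real, grads_syn):
        # 1 - cosine similarity per parameter tensor, a common matching distance.
        dist = dist + (1 - F.cosine_similarity(gr.flatten(), gs.flatten(), dim=0))
    return dist


# Toy usage with random placeholder data; sizes are arbitrary assumptions.
n_real, n_syn, n_feat, n_class = 200, 20, 16, 4
adj_real = torch.rand(n_real, n_real)
x_real = torch.randn(n_real, n_feat)
y_real = torch.randint(0, n_class, (n_real,))

# Learnable synthetic node features; the synthetic adjacency is kept fixed here
# purely for brevity.
x_syn = torch.randn(n_syn, n_feat, requires_grad=True)
adj_syn = torch.eye(n_syn)
y_syn = torch.randint(0, n_class, (n_syn,))

model = SimpleGCN(n_feat, 32, n_class)
opt_syn = torch.optim.Adam([x_syn], lr=0.01)

for step in range(10):
    loss = gradient_matching_loss(
        model, (adj_real, x_real, y_real), (adj_syn, x_syn, y_syn)
    )
    opt_syn.zero_grad()
    loss.backward()
    opt_syn.step()
```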

Cite

Text

Jin et al. "Graph Condensation for Graph Neural Networks." International Conference on Learning Representations, 2022.

Markdown

[Jin et al. "Graph Condensation for Graph Neural Networks." International Conference on Learning Representations, 2022.](https://mlanthology.org/iclr/2022/jin2022iclr-graph/)

BibTeX

@inproceedings{jin2022iclr-graph,
  title     = {{Graph Condensation for Graph Neural Networks}},
  author    = {Jin, Wei and Zhao, Lingxiao and Zhang, Shichang and Liu, Yozen and Tang, Jiliang and Shah, Neil},
  booktitle = {International Conference on Learning Representations},
  year      = {2022},
  url       = {https://mlanthology.org/iclr/2022/jin2022iclr-graph/}
}