A Graph Similarity for Deep Learning
Abstract
Graph neural networks (GNNs) have been successful in learning representations from graphs. Many popular GNNs follow the pattern of aggregate-transform: they aggregate the neighbors' attributes and then transform the results of aggregation with a learnable function. Analyses of these GNNs explain which pairs of non-identical graphs have different representations. However, we still lack an understanding of how similar these representations will be. We adopt kernel distance and propose transform-sum-cat as an alternative to aggregate-transform to reflect the continuous similarity between the node neighborhoods in the neighborhood aggregation. The idea leads to a simple and efficient graph similarity, which we name Weisfeiler-Leman similarity (WLS). In contrast to existing graph kernels, WLS is easy to implement with common deep learning frameworks. In graph classification experiments, transform-sum-cat significantly outperforms other neighborhood aggregation methods from popular GNN models. We also develop a simple and fast GNN model based on transform-sum-cat, which obtains, in comparison with widely used GNN models, (1) a higher accuracy in node classification, (2) a lower absolute error in graph regression, and (3) greater stability in adversarial training of graph generation.
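The transform-sum-cat scheme the abstract contrasts with aggregate-transform can be made concrete with a short sketch. The following is a minimal illustration, not the paper's reference implementation: it assumes a PyTorch-style dense adjacency, and names such as `TransformSumCat`, `hidden_dim`, and `graph_similarity` are hypothetical. The similarity function at the end only gestures at the kernel-style idea (pooled embeddings compared by inner product), not the exact WLS formula.

```python
# Minimal sketch of transform-sum-cat neighborhood aggregation,
# assuming PyTorch and a dense adjacency matrix. Illustrative only;
# not the authors' code.
import torch
import torch.nn as nn


class TransformSumCat(nn.Module):
    """One aggregation layer: transform each node's features first,
    sum the transformed features over neighbors, then concatenate the
    node's own transformed features with the neighbor sum."""

    def __init__(self, in_dim: int, hidden_dim: int):
        super().__init__()
        self.transform = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (num_nodes, in_dim) node features
        # adj: (num_nodes, num_nodes) adjacency without self-loops
        h = self.transform(x)                  # transform ...
        neigh = adj @ h                        # ... sum over neighbors ...
        return torch.cat([h, neigh], dim=-1)   # ... concatenate (cat)


def graph_similarity(x1, adj1, x2, adj2, layer):
    """Kernel-style similarity sketch (hypothetical helper): embed both
    graphs with the same layer, sum-pool node embeddings, and take the
    inner product of the pooled vectors."""
    g1 = layer(x1, adj1).sum(dim=0)
    g2 = layer(x2, adj2).sum(dim=0)
    return torch.dot(g1, g2)


# Toy usage on a 3-node path graph 0-1-2.
adj = torch.tensor([[0., 1., 0.],
                    [1., 0., 1.],
                    [0., 1., 0.]])
x = torch.randn(3, 4)
layer = TransformSumCat(in_dim=4, hidden_dim=8)
out = layer(x, adj)                       # shape: (3, 16)
sim = graph_similarity(x, adj, x, adj, layer)  # graph compared with itself
```

Note the ordering: transforming before summing is what lets the sum reflect a continuous similarity between neighborhoods, and the concatenation keeps a node's own representation separate from its neighbors' contribution rather than mixing them in a single aggregate.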
Cite
Text
Ok. "A Graph Similarity for Deep Learning." Neural Information Processing Systems, 2020.Markdown
[Ok. "A Graph Similarity for Deep Learning." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/ok2020neurips-graph/)BibTeX
@inproceedings{ok2020neurips-graph,
  title = {{A Graph Similarity for Deep Learning}},
  author = {Ok, Seongmin},
  booktitle = {Neural Information Processing Systems},
  year = {2020},
  url = {https://mlanthology.org/neurips/2020/ok2020neurips-graph/}
}