A Provable Framework of Learning Graph Embeddings via Summarization
Abstract
Given a large graph, can we learn its node embeddings from a smaller summary graph? What is the relationship between embeddings learned from original graphs and their summary graphs? Graph representation learning plays an important role in many graph mining applications, but learning embeddings of large-scale graphs remains a challenge. Recent works try to alleviate it via graph summarization, which typically includes three steps: reducing the graph size by combining nodes and edges into supernodes and superedges, learning the supernode embeddings on the summary graph, and then restoring the embeddings of the original nodes. However, the justification behind those steps is still unknown. In this work, we propose GELSUMM, a well-formulated graph embedding learning framework based on graph summarization, in which we show the theoretical ground of learning from summary graphs and the restoration with three well-known graph embedding approaches in a closed form. Through extensive experiments on real-world datasets, we demonstrate that our methods can learn graph embeddings with matching or better performance on downstream tasks. This work provides theoretical analysis for learning node embeddings via summarization and helps explain and understand the mechanism of the existing works.
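The three-step pipeline in the abstract (summarize, embed the summary, restore) can be sketched with a membership matrix P that assigns each original node to a supernode. This is a minimal illustration, not the paper's GELSUMM method: the grouping, the adjacency values, and the rank-k eigendecomposition standing in for an embedding method are all illustrative assumptions.

```python
import numpy as np

# Hypothetical adjacency matrix of a 6-node graph (symmetric, unweighted).
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

# Step 1: summarize. Assign nodes {0,1,2} and {3,4,5} to two supernodes;
# P[i, s] = 1 iff original node i belongs to supernode s.
P = np.array([
    [1, 0], [1, 0], [1, 0],
    [0, 1], [0, 1], [0, 1],
], dtype=float)
A_s = P.T @ A @ P  # superedge weights = summed edges between/within groups

# Step 2: learn supernode embeddings on the (much smaller) summary graph.
# A rank-k eigendecomposition stands in for any embedding method here.
k = 2
vals, vecs = np.linalg.eigh(A_s)
E_s = vecs[:, -k:] * np.sqrt(np.abs(vals[-k:]))

# Step 3: restore original-node embeddings by copying each supernode's
# embedding to its member nodes (the simplest possible restoration).
E = P @ E_s
print(E.shape)  # (6, 2): one k-dimensional embedding per original node
```

The key saving is that step 2 runs on A_s (here 2x2) instead of A (6x6); the paper's contribution is showing when and how such a restoration is theoretically justified.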
Cite
Text
Zhou et al. "A Provable Framework of Learning Graph Embeddings via Summarization." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I4.25621
Markdown
[Zhou et al. "A Provable Framework of Learning Graph Embeddings via Summarization." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/zhou2023aaai-provable/) doi:10.1609/AAAI.V37I4.25621
BibTeX
@inproceedings{zhou2023aaai-provable,
title = {{A Provable Framework of Learning Graph Embeddings via Summarization}},
author = {Zhou, Houquan and Liu, Shenghua and Koutra, Danai and Shen, Huawei and Cheng, Xueqi},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2023},
pages = {4946--4953},
doi = {10.1609/AAAI.V37I4.25621},
url = {https://mlanthology.org/aaai/2023/zhou2023aaai-provable/}
}