EDGE++: Improved Training and Sampling of EDGE

Abstract

Traditional graph-generative models like the Stochastic Block Model (SBM) fall short in capturing complex structures inherent in large graphs. Recently developed deep learning models like NetGAN, CELL, and Variational Graph Autoencoders have made progress but face limitations in replicating key graph statistics. Diffusion-based methods such as EDGE have emerged as promising alternatives; however, they present challenges in computational efficiency and generative performance. In this paper, we propose enhancements to the EDGE model to address these issues. Specifically, we introduce a degree-specific noise schedule that optimizes the number of active nodes at each timestep, significantly reducing memory consumption. Additionally, we present an improved sampling scheme that fine-tunes the generative process, allowing for better control over the similarity between the synthesized and the true network. Our experimental results demonstrate that the proposed modifications not only improve the efficiency but also enhance the accuracy of the generated graphs, offering a robust and scalable solution for graph generation tasks.
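The abstract's central idea is that the per-timestep noise (edge-removal) rate can be chosen with the degree sequence in mind so that only a bounded number of nodes become "active" at each step. The paper's actual schedule is not reproduced on this page; the sketch below is only a minimal illustration, under the assumption that each surviving edge is removed independently with probability beta at a given step, so a node of degree d is active with probability 1 - (1 - beta)^d. The function names (`expected_active_nodes`, `calibrate_beta`) and the `target_active` budget are hypothetical and not taken from the paper.

```python
import numpy as np

def expected_active_nodes(degrees, beta):
    """Expected number of nodes with at least one incident edge removed,
    assuming each edge is removed independently with probability beta."""
    d = np.asarray(degrees, dtype=float)
    return np.sum(1.0 - (1.0 - beta) ** d)

def calibrate_beta(degrees, target_active, lo=1e-6, hi=1.0, iters=60):
    """Bisection search for a removal probability whose expected number of
    active nodes matches a per-step budget (hypothetical calibration step)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if expected_active_nodes(degrees, mid) < target_active:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy usage: a skewed degree sequence and a per-step budget of 32 active nodes.
rng = np.random.default_rng(0)
degrees = rng.zipf(2.0, size=1000).clip(max=200)
beta_t = calibrate_beta(degrees, target_active=32)
print(f"beta_t = {beta_t:.5f}, expected active nodes = "
      f"{expected_active_nodes(degrees, beta_t):.1f}")
```

Because the expected number of active nodes is monotone in beta, such a calibration can in principle be repeated per timestep to keep memory use roughly constant; how EDGE++ actually parameterizes and learns its schedule is described in the paper itself.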

Cite

Text

Chen et al. "EDGE++: Improved Training and Sampling of EDGE." NeurIPS 2023 Workshops: SyntheticData4ML, 2023.

Markdown

[Chen et al. "EDGE++: Improved Training and Sampling of EDGE." NeurIPS 2023 Workshops: SyntheticData4ML, 2023.](https://mlanthology.org/neuripsw/2023/chen2023neuripsw-edge-a/)

BibTeX

@inproceedings{chen2023neuripsw-edge-a,
  title     = {{EDGE++: Improved Training and Sampling of EDGE}},
  author    = {Chen, Xiaohui and Wu, Mingyang and Liu, Liping},
  booktitle = {NeurIPS 2023 Workshops: SyntheticData4ML},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/chen2023neuripsw-edge-a/}
}