GraphRNN Revisited: An Ablation Study and Extensions for Directed Acyclic Graphs

Abstract

GraphRNN is a deep learning architecture proposed by You et al. for learning generative models of graphs. We reimplement the GraphRNN architecture, replicate the results of You et al., and evaluate the model against baselines using new metrics. Through an ablation study, we find that the BFS traversal suggested by You et al. to collapse representations of isomorphic graphs contributes significantly to model performance. Additionally, we extend GraphRNN to generate directed acyclic graphs by replacing the BFS traversal with a topological sort. We demonstrate that this method improves significantly over a directed-multiclass variant of GraphRNN on a real-world dataset.
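The two node-ordering schemes contrasted in the abstract can be sketched in plain Python. This is an illustrative sketch only, not the paper's implementation: `bfs_order` mimics the BFS sequencing GraphRNN applies to undirected graphs, and `topo_order` (Kahn's algorithm) is the kind of topological sort the DAG extension substitutes for it. The adjacency-list representation and example graphs are assumptions for the demo.

```python
from collections import deque

def bfs_order(adj, start=0):
    # BFS node ordering, as in GraphRNN's sequencing step (sketch;
    # the paper's actual implementation may differ in tie-breaking).
    seen = {start}
    order = []
    q = deque([start])
    while q:
        u = q.popleft()
        order.append(u)
        for v in sorted(adj[u]):  # deterministic neighbor order
            if v not in seen:
                seen.add(v)
                q.append(v)
    return order

def topo_order(adj_out, n):
    # Kahn's algorithm: a topological sort usable in place of BFS
    # when the graphs to be generated are directed and acyclic.
    indeg = [0] * n
    for u in range(n):
        for v in adj_out[u]:
            indeg[v] += 1
    q = deque(u for u in range(n) if indeg[u] == 0)
    order = []
    while q:
        u = q.popleft()
        order.append(u)
        for v in adj_out[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    return order

# Undirected triangle with a pendant node: edges 0-1, 0-2, 1-2, 2-3
undirected = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(bfs_order(undirected))   # [0, 1, 2, 3]

# A small DAG: 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3
dag = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(topo_order(dag, 4))      # [0, 1, 2, 3]
```

Either ordering fixes a canonical node sequence before the graph is flattened into the edge-sequence format an RNN can consume; the topological sort additionally guarantees every edge points from an earlier node to a later one, which is what makes acyclicity enforceable at generation time.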

Cite

Text

Ravichandran et al. "GraphRNN Revisited: An Ablation Study and Extensions for Directed Acyclic Graphs." NeurIPS 2023 Workshops: GLFrontiers, 2023.

Markdown

[Ravichandran et al. "GraphRNN Revisited: An Ablation Study and Extensions for Directed Acyclic Graphs." NeurIPS 2023 Workshops: GLFrontiers, 2023.](https://mlanthology.org/neuripsw/2023/ravichandran2023neuripsw-graphrnn/)

BibTeX

@inproceedings{ravichandran2023neuripsw-graphrnn,
  title     = {{GraphRNN Revisited: An Ablation Study and Extensions for Directed Acyclic Graphs}},
  author    = {Ravichandran, Maya and Koch, Mark and Das, Taniya and Khatri, Nikhil},
  booktitle = {NeurIPS 2023 Workshops: GLFrontiers},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/ravichandran2023neuripsw-graphrnn/}
}