Exploring the Design Space of Generative Diffusion Processes for Sparse Graphs
Abstract
We extend score-based generative diffusion processes (GDPs) to sparse graphs and other inherently discrete data, with a focus on scalability. GDPs apply diffusion to training samples, then learn a reverse process that generates new samples from noise. Previous work applying GDPs to discrete data effectively relaxes discrete variables to continuous ones. Our approach is different: we consider jump diffusion (i.e., diffusion with punctual discontinuities) in $\mathbb{R}^d \times \mathcal{G}$, where $\mathcal{G}$ models the discrete components of the data. We focus our attention on sparse graphs: our \textsc{Dissolve} process gradually breaks apart a graph $(V,E) \in \mathcal{G}$ through a number of distinct jump events. This confers significant advantages over GDPs that use less efficient representations and/or destroy the graph information abruptly. Gaussian kernels allow for efficient training with denoising score matching; standard GDP methods can be adapted by simply passing an extra argument to the score function. We consider improvement opportunities for \textsc{Dissolve} and discuss the conditions necessary to generalize to other kinds of inherently discrete data.
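The abstract only names the ingredients of the forward process; the sketch below is one plausible reading of it, not the authors' implementation. It assumes a VP-style Gaussian SDE on continuous node features combined with a Poisson-like jump process that deletes one edge per jump event; the function name `dissolve_forward` and the parameters `beta` and `jump_rate` are hypothetical.

```python
import numpy as np

def dissolve_forward(x, edges, n_steps=1000, beta=0.02, jump_rate=0.01, rng=None):
    """Hypothetical sketch of a Dissolve-style forward process on
    R^d x G: Gaussian diffusion on node features x, plus punctual
    jump events that gradually remove edges from (V, E)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    edges = list(edges)
    trajectory = []
    dt = 1.0 / n_steps
    for _ in range(n_steps):
        # Continuous part: one Euler-Maruyama step of a VP-style SDE,
        # dx = -0.5 * beta * x dt + sqrt(beta) dW (assumed form).
        x = x - 0.5 * beta * x * dt + np.sqrt(beta * dt) * rng.standard_normal(x.shape)
        # Discrete part: with probability jump_rate per step, a jump
        # event deletes one uniformly chosen edge, so the graph
        # dissolves gradually instead of being destroyed all at once.
        if edges and rng.random() < jump_rate:
            edges.pop(rng.integers(len(edges)))
        trajectory.append((x.copy(), tuple(edges)))
    return trajectory
```

Under this reading, the Gaussian part is what makes denoising score matching tractable, while the edge set at time $t$ is the "extra argument" the score function would condition on.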
Cite

Text
Noel and Rodriguez. "Exploring the Design Space of Generative Diffusion Processes for Sparse Graphs." NeurIPS 2022 Workshops: SBM, 2022.

Markdown
[Noel and Rodriguez. "Exploring the Design Space of Generative Diffusion Processes for Sparse Graphs." NeurIPS 2022 Workshops: SBM, 2022.](https://mlanthology.org/neuripsw/2022/noel2022neuripsw-exploring/)

BibTeX
@inproceedings{noel2022neuripsw-exploring,
  title     = {{Exploring the Design Space of Generative Diffusion Processes for Sparse Graphs}},
  author    = {Noel, Pierre-Andre and Rodriguez, Pau},
  booktitle = {NeurIPS 2022 Workshops: SBM},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/noel2022neuripsw-exploring/}
}