Learning Graph Structure from Convolutional Mixtures

Abstract

Machine learning frameworks such as graph neural networks typically rely on a given, fixed graph to exploit relational inductive biases and thus effectively learn from network data. However, when said graphs are (partially) unobserved, noisy, or dynamic, the problem of inferring graph structure from data becomes relevant. In this paper, we postulate a graph convolutional relationship between the observed and latent graphs, and formulate the graph structure learning task as a network inverse (deconvolution) problem. In lieu of eigendecomposition-based spectral methods or iterative optimization solutions, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN). GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive as well as node permutation equivariant. We corroborate the superior graph learning performance of GDNs and their generalization to larger graphs using synthetic data in supervised settings. Moreover, we demonstrate the robustness and representation power of GDNs on real-world neuroimaging and social network datasets.
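To make the unrolling concrete: the convolutional mixture model posits that the observed adjacency matrix $A_O$ is a graph convolution (matrix polynomial) of the latent one, $A_O \approx \sum_k h_k A_L^k$, so recovering a sparse, non-negative $A_L$ by proximal gradient descent yields iterations whose truncation gives the GDN layers. Below is a minimal, illustrative sketch of such an unrolled network for a first-order mixture ($K = 1$), written in PyTorch; the parameter names (`h0`, `h1`, `tau`, `lam`), initializations, and depth are assumptions for exposition, not the authors' reference implementation.

```python
# Hypothetical sketch of a GDN as truncated, unrolled proximal gradient
# iterations for the deconvolution problem
#   min_{A >= 0}  0.5 * ||A_obs - (h0*I + h1*A)||_F^2 + lam * ||A||_1,
# with the filter coefficients, step size, and sparsity weight learned per layer.
import torch
import torch.nn as nn


class GDNLayer(nn.Module):
    """One proximal gradient step with learnable parameters (illustrative)."""

    def __init__(self):
        super().__init__()
        self.h0 = nn.Parameter(torch.tensor(0.0))    # filter coefficient, identity term
        self.h1 = nn.Parameter(torch.tensor(1.0))    # filter coefficient, graph term
        self.tau = nn.Parameter(torch.tensor(0.1))   # step size
        self.lam = nn.Parameter(torch.tensor(0.05))  # sparsity weight

    def forward(self, A, A_obs):
        n = A_obs.shape[-1]
        I = torch.eye(n, device=A_obs.device)
        # Gradient of 0.5 * ||A_obs - (h0*I + h1*A)||_F^2 with respect to A.
        residual = A_obs - (self.h0 * I + self.h1 * A)
        grad = -self.h1 * residual
        # Proximal step for lam*||A||_1 under A >= 0: soft-threshold then clip.
        return torch.relu(A - self.tau * grad - self.tau * self.lam)


class GDN(nn.Module):
    """Stack of unrolled layers; the map A_obs -> A_hat is permutation equivariant."""

    def __init__(self, depth: int = 8):
        super().__init__()
        self.layers = nn.ModuleList([GDNLayer() for _ in range(depth)])

    def forward(self, A_obs):
        A = torch.zeros_like(A_obs)  # initialize the latent-graph estimate
        for layer in self.layers:
            A = layer(A, A_obs)
        return A
```

Training such a stack end to end on pairs of observed and latent graphs, with a binary link-prediction loss or a mean-squared edge-weight regression loss as the abstract describes, would realize the supervised setting; and since every operation above is entrywise or a matrix polynomial in the inputs, the resulting map is node permutation equivariant by construction.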

Cite

Text

Wasserman et al. "Learning Graph Structure from Convolutional Mixtures." Transactions on Machine Learning Research, 2023.

Markdown

[Wasserman et al. "Learning Graph Structure from Convolutional Mixtures." Transactions on Machine Learning Research, 2023.](https://mlanthology.org/tmlr/2023/wasserman2023tmlr-learning/)

BibTeX

@article{wasserman2023tmlr-learning,
  title     = {{Learning Graph Structure from Convolutional Mixtures}},
  author    = {Wasserman, Max and Sihag, Saurabh and Mateos, Gonzalo and Ribeiro, Alejandro},
  journal   = {Transactions on Machine Learning Research},
  year      = {2023},
  url       = {https://mlanthology.org/tmlr/2023/wasserman2023tmlr-learning/}
}