Learning Latent Graph Structures and Their Uncertainty

Abstract

Within a prediction task, Graph Neural Networks (GNNs) use relational information as an inductive bias to enhance the model's accuracy. As task-relevant relations might be unknown, graph structure learning approaches have been proposed to learn them while solving the downstream prediction task. In this paper, we demonstrate that minimization of a point-prediction loss function, e.g., the mean absolute error, does not guarantee proper learning of the latent relational information and its associated uncertainty. Conversely, we prove that a suitable loss function on the stochastic model outputs simultaneously grants (i) learning of the unknown latent distribution over adjacency matrices and (ii) optimal performance on the prediction task. Finally, we propose a sampling-based method that solves this joint learning task. Empirical results validate our theoretical claims and demonstrate the effectiveness of the proposed approach.
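
To make the high-level idea concrete, below is a minimal, self-contained PyTorch sketch of sampling-based joint learning: a Bernoulli distribution over adjacency matrices is trained together with a downstream regressor by drawing graph samples and scoring the resulting distribution of model outputs instead of a single point prediction. This is not the authors' implementation; the synthetic data-generating process, the straight-through Bernoulli sampler, and the energy-score-style loss are assumptions made purely for illustration.

import torch
import torch.nn as nn

torch.manual_seed(0)

N, F_IN, F_OUT, K = 8, 4, 1, 16   # nodes, feature sizes, graph samples per step

# Hypothetical ground-truth graph and linear readout used to generate targets.
A_true = (torch.rand(N, N) < 0.3).float().fill_diagonal_(0)
W_true = torch.randn(F_IN, F_OUT)

class StochasticGraphRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.edge_logits = nn.Parameter(torch.zeros(N, N))   # parametrizes P(A_ij = 1)
        self.lin = nn.Linear(F_IN, F_OUT, bias=False)         # one-hop message passing

    def sample_adj(self):
        p = torch.sigmoid(self.edge_logits)
        a_hard = torch.bernoulli(p.detach())                  # hard graph sample
        return a_hard + p - p.detach()                        # straight-through gradients

    def forward(self, x, n_samples=K):
        # Ensemble of predictions, one per sampled adjacency matrix.
        return torch.stack([self.sample_adj() @ self.lin(x) for _ in range(n_samples)])

def energy_score(samples, y):
    # Sample-based energy score: E||Y_hat - y|| - 0.5 * E||Y_hat - Y_hat'||,
    # a scoring rule on the predictive distribution (lower is better).
    term1 = (samples - y).norm(dim=(-2, -1)).mean()
    term2 = (samples - samples[torch.randperm(samples.size(0))]).norm(dim=(-2, -1)).mean()
    return term1 - 0.5 * term2

model = StochasticGraphRegressor()
opt = torch.optim.Adam(model.parameters(), lr=5e-2)

for step in range(500):
    x = torch.randn(N, F_IN)
    y = A_true @ x @ W_true + 0.05 * torch.randn(N, F_OUT)   # noisy targets
    loss = energy_score(model(x), y)                          # loss on the output distribution
    opt.zero_grad()
    loss.backward()
    opt.step()

print("learned edge probabilities:")
print(torch.sigmoid(model.edge_logits).detach())

The straight-through estimator and the energy score are just one common pair of choices for differentiating through discrete graph samples and scoring a sampled output distribution; the paper's actual loss, parametrization, and theoretical guarantees should be taken from the publication itself.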

Cite

Text

Manenti et al. "Learning Latent Graph Structures and Their Uncertainty." ICML 2024 Workshops: SPIGM, 2024.

Markdown

[Manenti et al. "Learning Latent Graph Structures and Their Uncertainty." ICML 2024 Workshops: SPIGM, 2024.](https://mlanthology.org/icmlw/2024/manenti2024icmlw-learning/)

BibTeX

@inproceedings{manenti2024icmlw-learning,
  title     = {{Learning Latent Graph Structures and Their Uncertainty}},
  author    = {Manenti, Alessandro and Zambon, Daniele and Alippi, Cesare},
  booktitle = {ICML 2024 Workshops: SPIGM},
  year      = {2024},
  url       = {https://mlanthology.org/icmlw/2024/manenti2024icmlw-learning/}
}