Probabilistic Task-Adaptive Graph Rewiring

Abstract

Message-passing graph neural networks (MPNNs) have emerged as powerful tools for processing graph-structured input. However, they operate on a fixed graph structure, ignoring potential noise and missing information. In addition, due to their purely local aggregation mechanism, they are susceptible to phenomena such as over-smoothing, over-squashing, or under-reaching. Hence, devising principled approaches for learning to focus on the graph structure relevant to a given prediction task remains an open challenge. In this work, leveraging recent progress in differentiable $k$-subset sampling, we devise a novel task-adaptive graph rewiring approach, which learns to add relevant edges while omitting less beneficial ones. We empirically demonstrate on synthetic datasets that our approach effectively alleviates the issues of over-squashing and under-reaching. In addition, on established real-world datasets, we demonstrate that our method is competitive with or superior to conventional MPNN models and graph transformer architectures regarding predictive performance and computational efficiency.
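The abstract's core mechanism is differentiable $k$-subset sampling over candidate edges. As a minimal sketch of one common realization of this idea (a Gumbel-perturbed top-$k$ selection, not necessarily the exact estimator used in the paper; the function name and signature are hypothetical):

```python
import numpy as np

def gumbel_topk_edge_mask(scores, k, tau=1.0, seed=None):
    """Sample a hard k-subset mask over candidate edges.

    scores : learned per-edge logits (higher = more relevant edge)
    k      : number of edges to keep (the rewiring budget)
    tau    : temperature scaling the perturbed scores

    Adds Gumbel noise to the scores and keeps the top-k entries.
    A full differentiable implementation would pair this hard
    forward pass with a soft relaxation (e.g. straight-through
    gradients); only the sampling step is sketched here.
    """
    rng = np.random.default_rng(seed)
    # Gumbel(0, 1) noise: -log(-log(U)) with U ~ Uniform(0, 1).
    gumbel = -np.log(-np.log(rng.uniform(size=scores.shape)))
    perturbed = (scores + gumbel) / tau
    # Indices of the k largest perturbed scores -> hard 0/1 mask.
    top_idx = np.argpartition(-perturbed, k)[:k]
    mask = np.zeros_like(scores)
    mask[top_idx] = 1.0
    return mask
```

The returned mask would gate which candidate edges are added to (or removed from) the graph before message passing, so the rewiring budget $k$ is enforced exactly on every forward pass.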

Cite

Text

Qian et al. "Probabilistic Task-Adaptive Graph Rewiring." ICML 2023 Workshops: Differentiable Almost Everything, 2023.

Markdown

[Qian et al. "Probabilistic Task-Adaptive Graph Rewiring." ICML 2023 Workshops: Differentiable Almost Everything, 2023.](https://mlanthology.org/icmlw/2023/qian2023icmlw-probabilistic/)

BibTeX

@inproceedings{qian2023icmlw-probabilistic,
  title     = {{Probabilistic Task-Adaptive Graph Rewiring}},
  author    = {Qian, Chendi and Manolache, Andrei and Ahmed, Kareem and Zeng, Zhe and Van den Broeck, Guy and Niepert, Mathias and Morris, Christopher},
  booktitle = {ICML 2023 Workshops: Differentiable Almost Everything},
  year      = {2023},
  url       = {https://mlanthology.org/icmlw/2023/qian2023icmlw-probabilistic/}
}