Neural Execution of Graph Algorithms

Abstract

Graph Neural Networks (GNNs) are a powerful representational tool for solving problems on graph-structured inputs. In almost all cases so far, however, they have been applied to directly recovering a final solution from raw inputs, without explicit guidance on how to structure their problem-solving. Here, instead, we focus on learning in the space of algorithms: we train several state-of-the-art GNN architectures to imitate individual steps of classical graph algorithms, parallel (breadth-first search, Bellman-Ford) as well as sequential (Prim's algorithm). As graph algorithms usually rely on making discrete decisions within neighbourhoods, we hypothesise that maximisation-based message passing neural networks are best-suited for such objectives, and validate this claim empirically. We also demonstrate how learning in the space of algorithms can yield new opportunities for positive transfer between tasks---showing how learning a shortest-path algorithm can be substantially improved when simultaneously learning a reachability algorithm.
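The abstract's claim about maximisation-based message passing can be illustrated with a minimal sketch (this is not the authors' architecture; the function and variable names below are hypothetical, and node states are scalars for simplicity). With reachability semantics, one max-aggregation step emulates one parallel expansion step of breadth-first search:

```python
# Sketch of a single max-aggregation message-passing step (illustrative only;
# not the paper's implementation). Node states are scalars for simplicity.

def mpnn_max_step(adj, h, message):
    """One propagation step: each node takes the max over messages from its
    neighbours, plus a self-message so isolated nodes keep their state.

    adj     -- dict mapping node -> list of neighbour nodes
    h       -- dict mapping node -> current scalar state
    message -- function (h_u, h_v) -> message sent from u to v
    """
    new_h = {}
    for v, nbrs in adj.items():
        candidates = [h[v]]  # self-message: keep the current state
        for u in nbrs:
            candidates.append(message(h[u], h[v]))
        new_h[v] = max(candidates)
    return new_h

# Reachability semantics: h[v] = 1 iff v has been reached. Passing each
# neighbour's state unchanged and taking the max performs one BFS step.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
h = {0: 1, 1: 0, 2: 0, 3: 0}  # only node 0 reached initially
h = mpnn_max_step(adj, h, lambda hu, hv: hu)
# -> {0: 1, 1: 1, 2: 0, 3: 0}: node 1, adjacent to node 0, is now reached
```

Replacing the message with a negated edge-weight relaxation and states with negated distances would analogously mimic one round of Bellman-Ford, which is the kind of step-level alignment between max aggregation and discrete neighbourhood decisions that the paper validates empirically.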

Cite

Text

Veličković et al. "Neural Execution of Graph Algorithms." International Conference on Learning Representations, 2020.

Markdown

[Veličković et al. "Neural Execution of Graph Algorithms." International Conference on Learning Representations, 2020.](https://mlanthology.org/iclr/2020/velickovic2020iclr-neural/)

BibTeX

@inproceedings{velickovic2020iclr-neural,
  title     = {{Neural Execution of Graph Algorithms}},
  author    = {Veličković, Petar and Ying, Rex and Padovano, Matilde and Hadsell, Raia and Blundell, Charles},
  booktitle = {International Conference on Learning Representations},
  year      = {2020},
  url       = {https://mlanthology.org/iclr/2020/velickovic2020iclr-neural/}
}