Latent Space Representations of Neural Algorithmic Reasoners

Abstract

Neural Algorithmic Reasoning (NAR) is a research area focused on designing neural architectures that can reliably capture classical computation, usually by learning to execute algorithms. A typical approach is to rely on Graph Neural Network (GNN) architectures, which encode inputs in high-dimensional latent spaces that are repeatedly transformed during the execution of the algorithm. In this work, we perform a detailed analysis of the structure of the latent space induced by the GNN when executing algorithms. We identify two possible failure modes: (i) loss of resolution, making it hard to distinguish similar values; and (ii) inability to deal with values outside the range observed during training. We propose to address the first issue by relying on a softmax aggregator, and the second by decaying the latent space in order to handle out-of-range values. We show that these changes lead to improvements on the majority of algorithms in the standard CLRS-30 benchmark when using the state-of-the-art Triplet-GMPNN processor.
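To give a concrete sense of the softmax aggregator mentioned above, the sketch below shows one minimal way to implement it in JAX. This is not the authors' code: the dense message layout, the adjacency mask adj, and the temperature tau are illustrative assumptions. The idea is that an elementwise softmax over incoming messages smoothly approximates the max aggregator used in the processor while keeping similar values distinguishable.

import jax
import jax.numpy as jnp

def softmax_aggregate(messages, adj, tau=1.0):
    # messages: [N, N, H] tensor; messages[i, j] is the message from sender j to receiver i.
    # adj:      [N, N] binary mask; adj[i, j] = 1 if there is an edge j -> i.
    # tau:      temperature (assumed hyperparameter); as tau -> 0 this approaches
    #           the hard elementwise max aggregator.
    # Mask non-edges with a large negative logit so they receive ~zero softmax weight.
    logits = jnp.where(adj[..., None] == 1, messages / tau, -1e9)
    # Elementwise softmax over senders (axis 1), one weight per hidden feature.
    weights = jax.nn.softmax(logits, axis=1)
    # The weighted sum of messages is a smooth approximation of the max over senders.
    return jnp.sum(weights * messages, axis=1)

A finite temperature keeps the aggregation differentiable and, unlike a hard max, preserves some resolution between messages with similar values, which is the first failure mode the abstract targets; the exact formulation used in the paper should be checked against the full text.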

Cite

Text

Mirjanic et al. "Latent Space Representations of Neural Algorithmic Reasoners." Proceedings of the Second Learning on Graphs Conference, 2023.

Markdown

[Mirjanic et al. "Latent Space Representations of Neural Algorithmic Reasoners." Proceedings of the Second Learning on Graphs Conference, 2023.](https://mlanthology.org/log/2023/mirjanic2023log-latent/)

BibTeX

@inproceedings{mirjanic2023log-latent,
  title     = {{Latent Space Representations of Neural Algorithmic Reasoners}},
  author    = {Mirjanic, Vladimir V and Pascanu, Razvan and Veličković, Petar},
  booktitle = {Proceedings of the Second Learning on Graphs Conference},
  year      = {2023},
  pages     = {10:1--10:24},
  volume    = {231},
  url       = {https://mlanthology.org/log/2023/mirjanic2023log-latent/}
}