Recursive Algorithmic Reasoning
Abstract
Learning models that execute algorithms can enable us to address a key problem in deep learning: generalizing to out-of-distribution data. However, neural networks are currently unable to execute recursive algorithms because they do not have arbitrarily large memory to store and recall state. To address this, we (1) propose a way to augment graph neural networks (GNNs) with a stack, and (2) develop an approach for sampling intermediate algorithm trajectories that improves alignment with recursive algorithms over previous methods. The stack allows the network to learn to store and recall a portion of its state at a particular time, analogous to the action of a call stack in a recursive algorithm. This augmentation permits the network to reason recursively. We empirically demonstrate that our proposals significantly improve generalization to larger input graphs over prior work on depth-first search (DFS).
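To make the stack mechanism concrete, here is a minimal PyTorch sketch of a message-passing step augmented with an external stack of node-state snapshots. The class name `StackAugmentedProcessor`, the sum-aggregation GNN, and the hard argmax push/pop controller are illustrative assumptions rather than the paper's exact architecture; the sketch only shows how a processor can save and later restore its state, the way a call stack does during a recursive DFS.

```python
import torch
import torch.nn as nn


class StackAugmentedProcessor(nn.Module):
    """Illustrative sketch: a GNN message-passing step plus an external
    stack of node-state snapshots, mimicking how a call stack saves and
    restores state in a recursive algorithm such as DFS. Not the paper's
    exact architecture; the hard argmax actions are a simplification."""

    def __init__(self, dim: int):
        super().__init__()
        self.message = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())
        self.update = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())
        # Controller scores three stack actions: no-op, push, pop.
        self.action_head = nn.Linear(dim, 3)
        # Mixes a popped snapshot back into the current node states.
        self.restore = nn.Linear(2 * dim, dim)

    def forward(self, h, adj, stack):
        # h: [N, dim] node states, adj: [N, N] dense 0/1 adjacency,
        # stack: Python list of [N, dim] snapshots (the "call stack").
        n = h.size(0)
        pair = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                          h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        msgs = self.message(pair) * adj.unsqueeze(-1)   # mask non-edges
        agg = msgs.sum(dim=1)                           # sum aggregation
        h_new = self.update(torch.cat([h, agg], dim=-1))

        # Graph-level readout decides whether to push or pop this step.
        action = self.action_head(h_new.mean(dim=0)).argmax().item()
        if action == 1:                                 # push: save state
            stack.append(h_new.detach())
        elif action == 2 and stack:                     # pop: restore state
            popped = stack.pop()
            h_new = self.restore(torch.cat([h_new, popped], dim=-1))
        return h_new, stack


if __name__ == "__main__":
    torch.manual_seed(0)
    proc = StackAugmentedProcessor(dim=16)
    h = torch.randn(5, 16)                              # 5 nodes
    adj = (torch.rand(5, 5) > 0.5).float()
    stack = []
    for _ in range(4):                                  # a few processor steps
        h, stack = proc(h, adj, stack)
    print(h.shape, "stack depth:", len(stack))
```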
Cite
Text
Jürß et al. "Recursive Algorithmic Reasoning." Proceedings of the Second Learning on Graphs Conference, 2023.
Markdown
[Jürß et al. "Recursive Algorithmic Reasoning." Proceedings of the Second Learning on Graphs Conference, 2023.](https://mlanthology.org/log/2023/jur2023log-recursive/)
BibTeX
@inproceedings{jur2023log-recursive,
title = {{Recursive Algorithmic Reasoning}},
author = {Jürß, Jonas and Jayalath, Dulhan Hansaja and Veličković, Petar},
booktitle = {Proceedings of the Second Learning on Graphs Conference},
year = {2023},
pages = {5:1-5:14},
volume = {231},
url = {https://mlanthology.org/log/2023/jur2023log-recursive/}
}