Attending to Graph Transformers
Abstract
Recently, transformer architectures for graphs emerged as an alternative to established techniques for machine learning with graphs, such as (message-passing) graph neural networks. So far, they have shown promising empirical results, e.g., on molecular prediction datasets, often attributed to their ability to circumvent graph neural networks' shortcomings, such as over-smoothing and over-squashing. Here, we derive a taxonomy of graph transformer architectures, bringing some order to this emerging field. We overview their theoretical properties, survey structural and positional encodings, and discuss extensions for important graph classes, e.g., 3D molecular graphs. Empirically, we probe how well graph transformers can recover various graph properties, how well they can deal with heterophilic graphs, and to what extent they prevent over-squashing. Further, we outline open challenges and research directions to stimulate future work.
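To make the central ingredients concrete, the following is a minimal sketch (not taken from the paper) of a graph transformer building block: Laplacian-eigenvector positional encodings concatenated to node features, followed by a single dense self-attention layer over all node pairs. All function names, dimensions, and the toy graph are illustrative assumptions.

```python
# Illustrative sketch only: Laplacian positional encodings + one self-attention
# layer over graph nodes. Dimensions and names are assumptions, not the paper's code.
import numpy as np

def laplacian_pe(adj: np.ndarray, k: int) -> np.ndarray:
    """k smallest non-trivial eigenvectors of the symmetric normalized Laplacian."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    lap = np.eye(adj.shape[0]) - d_inv_sqrt @ adj @ d_inv_sqrt
    eigvals, eigvecs = np.linalg.eigh(lap)           # eigenvalues in ascending order
    return eigvecs[:, 1 : k + 1]                     # skip the trivial eigenvector

def attention_layer(x: np.ndarray, d_head: int, rng: np.random.Generator) -> np.ndarray:
    """One dense (graph-agnostic) self-attention layer over all node pairs."""
    d = x.shape[1]
    w_q, w_k, w_v = (rng.standard_normal((d, d_head)) / np.sqrt(d) for _ in range(3))
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(d_head)
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)          # row-wise softmax
    return attn @ v

rng = np.random.default_rng(0)
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)          # toy 4-node graph
feats = rng.standard_normal((4, 8))                  # random node features
x = np.concatenate([feats, laplacian_pe(adj, k=2)], axis=1)
out = attention_layer(x, d_head=16, rng=rng)
print(out.shape)                                     # (4, 16)
```

Because attention here is computed over all node pairs, the graph structure enters only through the positional encodings; this is one common design point in the taxonomy, alongside variants that bias or mask attention with the adjacency structure.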
Cite
Text
Müller et al. "Attending to Graph Transformers." Transactions on Machine Learning Research, 2024.
Markdown
[Müller et al. "Attending to Graph Transformers." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/muller2024tmlr-attending/)
BibTeX
@article{muller2024tmlr-attending,
  title   = {{Attending to Graph Transformers}},
  author  = {Müller, Luis and Galkin, Mikhail and Morris, Christopher and Rampášek, Ladislav},
  journal = {Transactions on Machine Learning Research},
  year    = {2024},
  url     = {https://mlanthology.org/tmlr/2024/muller2024tmlr-attending/}
}