Homomorphism Counts as Structural Encodings for Molecular Property Prediction
Abstract
Graph transformers are popular neural networks that extend the well-known transformer architecture to the graph domain. These architectures operate by applying self-attention on graph nodes and incorporating graph structure through the use of positional encodings (e.g., Laplacian positional encoding) or structural encodings (e.g., random-walk structural encoding). The quality of such encodings is critical, since they provide the necessary \emph{graph inductive biases} to condition the model on graph structure. In this work, we propose \emph{motif structural encoding} (\emph{MoSE}) as a flexible and powerful structural encoding framework based on counting graph homomorphisms. Theoretically, we compare the expressive power of MoSE to random-walk structural encoding and relate both encodings to the expressive power of standard message passing neural networks. Empirically, we observe that MoSE outperforms other well-known positional and structural encodings across a range of architectures, and it achieves state-of-the-art performance on widely studied molecular property prediction datasets.
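The abstract does not spell out the construction, but the core idea of using homomorphism counts of small motif patterns as node-level structural features can be illustrated with a brute-force sketch. The snippet below is a hypothetical illustration, not the authors' MoSE implementation: the function name `rooted_hom_counts`, the NetworkX toy graphs, and the choice of motifs are all assumptions made for demonstration.

```python
from itertools import product
import networkx as nx


def rooted_hom_counts(host: nx.Graph, pattern: nx.Graph, root):
    """Count homomorphisms from `pattern` into `host`, grouped by where `root` lands.

    A homomorphism maps every pattern vertex to a host vertex (not necessarily
    injectively) so that every pattern edge is mapped onto a host edge.
    Brute force: enumerate all |V(host)|^|V(pattern)| assignments.
    """
    p_nodes = list(pattern.nodes)
    counts = {v: 0 for v in host.nodes}
    for image in product(host.nodes, repeat=len(p_nodes)):
        phi = dict(zip(p_nodes, image))
        if all(host.has_edge(phi[u], phi[v]) for u, v in pattern.edges):
            counts[phi[root]] += 1
    return counts


if __name__ == "__main__":
    host = nx.cycle_graph(6)                         # toy "molecule": a 6-ring
    motifs = [nx.cycle_graph(3), nx.path_graph(3)]   # hypothetical motif set
    per_motif = [rooted_hom_counts(host, m, root=0) for m in motifs]
    # Each node's structural encoding is its vector of rooted homomorphism counts,
    # which could be concatenated to node features before self-attention.
    encoding = {v: [c[v] for c in per_motif] for v in host.nodes}
    print(encoding)
```

In practice one would replace the exponential enumeration with an efficient homomorphism-counting routine and a task-appropriate motif set; the sketch only shows how per-node counts can serve as a structural encoding.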
Cite
Text
Bao et al. "Homomorphism Counts as Structural Encodings for Molecular Property Prediction." NeurIPS 2024 Workshops: AIDrugX, 2024.
Markdown
[Bao et al. "Homomorphism Counts as Structural Encodings for Molecular Property Prediction." NeurIPS 2024 Workshops: AIDrugX, 2024.](https://mlanthology.org/neuripsw/2024/bao2024neuripsw-homomorphism/)
BibTeX
@inproceedings{bao2024neuripsw-homomorphism,
title = {{Homomorphism Counts as Structural Encodings for Molecular Property Prediction}},
author = {Bao, Linus and Jin, Emily and Bronstein, Michael M. and Ceylan, Ismail Ilkan and Lanzinger, Matthias},
booktitle = {NeurIPS 2024 Workshops: AIDrugX},
year = {2024},
url = {https://mlanthology.org/neuripsw/2024/bao2024neuripsw-homomorphism/}
}