Port-Hamiltonian Architectural Bias for Long-Range Propagation in Deep Graph Networks
Abstract
The dynamics of information diffusion within graphs is a critical open issue that heavily influences graph representation learning, especially when considering long-range propagation. This calls for principled approaches that control and regulate the degree of propagation and dissipation of information throughout the neural flow. Motivated by this, we introduce port-Hamiltonian Deep Graph Networks, a novel framework that models neural information flow in graphs by building on the conservation laws of Hamiltonian dynamical systems. We reconcile, under a single theoretical and practical framework, both non-dissipative long-range propagation and non-conservative behaviors, introducing tools from mechanical systems to gauge the equilibrium between the two components. Our approach applies to general message-passing architectures and provides theoretical guarantees on information conservation over time. Empirical results demonstrate the effectiveness of our port-Hamiltonian scheme in pushing simple graph convolutional architectures to state-of-the-art performance on long-range benchmarks.
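The abstract contrasts conservative (non-dissipative) propagation with dissipative, non-conservative behavior. A minimal numerical sketch of this idea, and not the paper's actual architecture, is a port-Hamiltonian system dx/dt = (J - R)∇H(x), where the skew-symmetric J preserves the Hamiltonian H and the positive semidefinite R acts as a dissipative port. All names, the toy state dimension, and the quadratic Hamiltonian below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8  # toy state dimension (e.g. stacked node features; illustrative only)

# Skew-symmetric J: generates the conservative (Hamiltonian) part of the flow.
A = rng.standard_normal((n, n))
J = A - A.T

# Positive semidefinite R = L L^T: the dissipative component ("port").
L = 0.3 * rng.standard_normal((n, n))
R = L @ L.T

def hamiltonian(x):
    # Toy quadratic Hamiltonian H(x) = 0.5 ||x||^2, so grad H(x) = x.
    return 0.5 * float(x @ x)

def integrate(x0, R_mat, h=5e-4, steps=200):
    # Forward-Euler integration of dx/dt = (J - R) grad H(x).
    x = x0.copy()
    for _ in range(steps):
        x = x + h * (J - R_mat) @ x
    return x

x0 = rng.standard_normal(n)
E0 = hamiltonian(x0)

# Purely conservative flow (R = 0): energy is approximately preserved
# (exactly preserved in continuous time; Euler introduces a small drift).
E_cons = hamiltonian(integrate(x0, np.zeros((n, n))))

# Port-Hamiltonian flow with dissipation: energy decays over time.
E_diss = hamiltonian(integrate(x0, R))

print(f"E0={E0:.4f}  conservative={E_cons:.4f}  dissipative={E_diss:.4f}")
```

With R = 0 the Hamiltonian stays (nearly) constant, which is the mechanism behind non-dissipative long-range propagation; turning on R recovers non-conservative behavior, and balancing the two is what the paper's framework makes tunable.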
Cite
Text
Heilig et al. "Port-Hamiltonian Architectural Bias for Long-Range Propagation in Deep Graph Networks." International Conference on Learning Representations, 2025.
Markdown
[Heilig et al. "Port-Hamiltonian Architectural Bias for Long-Range Propagation in Deep Graph Networks." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/heilig2025iclr-porthamiltonian/)
BibTeX
@inproceedings{heilig2025iclr-porthamiltonian,
title = {{Port-Hamiltonian Architectural Bias for Long-Range Propagation in Deep Graph Networks}},
author = {Heilig, Simon and Gravina, Alessio and Trenta, Alessandro and Gallicchio, Claudio and Bacciu, Davide},
booktitle = {International Conference on Learning Representations},
year = {2025},
url = {https://mlanthology.org/iclr/2025/heilig2025iclr-porthamiltonian/}
}