Range-Aware Positional Encoding via High-Order Pretraining: Theory and Practice
Abstract
Building on the Wavelet Positional Encoding of Ngo et al., we propose $\textbf{HOPE-WavePE}$ ($\textbf{H}$igh-$\textbf{O}$rder $\textbf{P}$ermutation $\textbf{E}$quivariant $\textbf{Wave}$let $\textbf{P}$ositional $\textbf{E}$ncoding), a novel pre-training strategy for positional encoding that is equivariant under the permutation group and sensitive to the length and diameter of graphs in downstream tasks. Since our approach relies solely on the graph structure, it is domain-agnostic and adaptable to datasets from various domains, thereby paving the way for general graph structure encoders and graph foundation models. We theoretically demonstrate that such an equivariant pretraining scheme can approximate the training target to arbitrarily small tolerance. We also evaluate HOPE-WavePE on graph-level prediction tasks across different areas and show its superiority over other methods. We will release our source code upon acceptance.
Cite
Nguyen et al. "Range-Aware Positional Encoding via High-Order Pretraining: Theory and Practice." NeurIPS 2024 Workshops: NeurReps, 2024.
@inproceedings{nguyen2024neuripsw-rangeaware,
title = {{Range-Aware Positional Encoding via High-Order Pretraining: Theory and Practice}},
author = {Nguyen, Viet Anh and Ngo, Nhat Khang and Son, Hy Truong},
booktitle = {NeurIPS 2024 Workshops: NeurReps},
year = {2024},
url = {https://mlanthology.org/neuripsw/2024/nguyen2024neuripsw-rangeaware/}
}