Node Embeddings via Neighbor Embeddings
Abstract
Node embeddings are a paradigm in non-parametric graph representation learning, where graph nodes are embedded into a given vector space to enable downstream processing. State-of-the-art node-embedding algorithms, such as DeepWalk and node2vec, are based on random-walk notions of node similarity and on contrastive learning. In this work, we introduce the graph neighbor-embedding (graph NE) framework that directly pulls together embedding vectors of adjacent nodes without relying on any random walks. We show that graph NE strongly outperforms state-of-the-art node-embedding algorithms in terms of local structure preservation. Furthermore, we apply graph NE to the 2D node-embedding problem, obtaining graph t-SNE layouts that also outperform existing graph-layout algorithms.
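The core idea of graph NE — pulling embedding vectors of adjacent nodes together while pushing random pairs apart, with no random walks — can be illustrated with a toy sketch. The kernel, learning rate, and negative-sampling scheme below are illustrative assumptions, not the paper's exact objective:

```python
import numpy as np

def graph_ne_sketch(edges, n_nodes, dim=2, n_epochs=200, lr=0.05,
                    n_neg=5, seed=0):
    """Toy neighbor-embedding-style graph layout (hypothetical sketch):
    adjacent nodes attract, randomly sampled pairs repel."""
    rng = np.random.default_rng(seed)
    Y = rng.normal(scale=1e-2, size=(n_nodes, dim))
    for _ in range(n_epochs):
        for i, j in edges:
            d = Y[i] - Y[j]
            # attraction along the edge (Cauchy kernel, t-SNE-like)
            grad = d / (1.0 + d @ d)
            Y[i] -= lr * grad
            Y[j] += lr * grad
            # repulsion from randomly sampled negative nodes
            for k in rng.integers(0, n_nodes, size=n_neg):
                d = Y[i] - Y[k]
                q = 1.0 / (1.0 + d @ d)
                Y[i] += lr * q * q * d
    return Y

# usage: embed a 6-cycle into 2D
edges = [(i, (i + 1) % 6) for i in range(6)]
Y = graph_ne_sketch(edges, n_nodes=6)
```

This mirrors the attraction/repulsion structure common to neighbor-embedding methods; the actual graph NE and graph t-SNE objectives are defined in the paper.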
Cite
Text

Böhm et al. "Node Embeddings via Neighbor Embeddings." Transactions on Machine Learning Research, 2025.

Markdown

[Böhm et al. "Node Embeddings via Neighbor Embeddings." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/bohm2025tmlr-node/)

BibTeX
@article{bohm2025tmlr-node,
  title = {{Node Embeddings via Neighbor Embeddings}},
  author = {Böhm, Jan Niklas and Keute, Marius and Guzmán, Alica and Damrich, Sebastian and Draganov, Andrew and Kobak, Dmitry},
  journal = {Transactions on Machine Learning Research},
  year = {2025},
  url = {https://mlanthology.org/tmlr/2025/bohm2025tmlr-node/}
}