Disentangled and Self-Explainable Node Representation Learning

Abstract

Node embeddings are low-dimensional vectors that capture node properties, typically learned through unsupervised structural similarity objectives or supervised tasks. While recent efforts have focused on post-hoc explanations for graph models, intrinsic interpretability in unsupervised node embeddings remains largely underexplored. To bridge this gap, we introduce DiSeNE (Disentangled and Self-Explainable Node Embedding), a framework that learns self-explainable node representations in an unsupervised fashion. By leveraging disentangled representation learning, DiSeNE ensures that each embedding dimension corresponds to a distinct topological substructure of the graph, thus offering clear, dimension-wise interpretability. We design new objective functions, grounded in principled desiderata, that jointly optimize for structural fidelity, disentanglement, and human interpretability. Additionally, we propose several new metrics to evaluate representation quality and human interpretability. Extensive experiments on multiple benchmark datasets demonstrate that DiSeNE not only preserves the underlying graph structure but also provides transparent, human-understandable explanations for each embedding dimension.
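
To make the general idea concrete, here is a minimal, hypothetical sketch of dimension-wise interpretable node embeddings learned without supervision. It is not the DiSeNE implementation described in the paper: the reconstruction loss, the ReLU non-negativity, the decorrelation penalty, the example graph, and all variable names are illustrative assumptions only.

```python
# Illustrative sketch (NOT the authors' method): non-negative node embeddings
# trained to reconstruct the adjacency matrix, with a decorrelation penalty
# that nudges each dimension toward capturing a distinct substructure.
import torch
import networkx as nx

g = nx.karate_club_graph()                       # small example graph (assumption)
A = torch.tensor(nx.to_numpy_array(g), dtype=torch.float32)
n, d = A.shape[0], 4                             # number of nodes, embedding dimensions

Z = torch.rand(n, d, requires_grad=True)         # raw embedding parameters
opt = torch.optim.Adam([Z], lr=0.05)

for step in range(500):
    opt.zero_grad()
    E = torch.relu(Z)                            # non-negativity for readability of dimensions
    A_hat = torch.sigmoid(E @ E.T)               # reconstructed edge probabilities
    fidelity_loss = torch.nn.functional.binary_cross_entropy(A_hat, A)

    # Decorrelation: push off-diagonal entries of the dimension covariance to zero
    Ec = E - E.mean(dim=0, keepdim=True)
    cov = (Ec.T @ Ec) / n
    off_diag = cov - torch.diag(torch.diag(cov))
    disentangle_loss = (off_diag ** 2).sum()

    loss = fidelity_loss + 0.1 * disentangle_loss   # weighting is an arbitrary choice here
    loss.backward()
    opt.step()

# Dimension-wise "explanation": the nodes that activate each dimension most strongly
E = torch.relu(Z).detach()
for k in range(d):
    top_nodes = torch.topk(E[:, k], 5).indices.tolist()
    print(f"dimension {k}: most associated nodes -> {top_nodes}")
```

In this toy setup, non-negativity lets each dimension be read as a soft membership in some part of the graph, so listing its top-activating nodes gives a simple, human-inspectable summary per dimension; the paper's actual objectives and evaluation metrics are more principled than this sketch.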

Cite

Text

Piaggesi et al. "Disentangled and Self-Explainable Node Representation Learning." Transactions on Machine Learning Research, 2025.

Markdown

[Piaggesi et al. "Disentangled and Self-Explainable Node Representation Learning." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/piaggesi2025tmlr-disentangled/)

BibTeX

@article{piaggesi2025tmlr-disentangled,
  title     = {{Disentangled and Self-Explainable Node Representation Learning}},
  author    = {Piaggesi, Simone and Panisson, André and Khosla, Megha},
  journal   = {Transactions on Machine Learning Research},
  year      = {2025},
  url       = {https://mlanthology.org/tmlr/2025/piaggesi2025tmlr-disentangled/}
}