TopoSRL: Topology Preserving Self-Supervised Simplicial Representation Learning

Abstract

In this paper, we introduce $\texttt{TopoSRL}$, a novel self-supervised learning (SSL) method for simplicial complexes to effectively capture higher-order interactions and preserve topology in the learned representations. $\texttt{TopoSRL}$ addresses the limitations of existing graph-based SSL methods, which typically concentrate on pairwise relationships and neglect the long-range dependencies crucial for capturing topological information. We propose a new simplicial augmentation technique that efficiently generates two views of the simplicial complex, enriching the learned representations. Next, we propose a new simplicial contrastive loss function that contrasts the generated simplices to preserve the local and global information present in the simplicial complexes. Extensive experimental results demonstrate the superior performance of $\texttt{TopoSRL}$ compared to state-of-the-art graph SSL techniques and supervised simplicial neural models across various datasets, corroborating the efficacy of $\texttt{TopoSRL}$ in processing simplicial complex data in a self-supervised setting.
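The abstract does not spell out the loss, but the general recipe it describes (contrast embeddings of matching simplices across two augmented views, using non-matching simplices as negatives) can be sketched with a generic InfoNCE-style objective. The function name, temperature, and toy embeddings below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def info_nce_loss(z1, z2, tau=0.5):
    """Generic two-view contrastive loss (InfoNCE-style), not the paper's exact objective.

    z1, z2: (n, d) arrays of embeddings for the same n simplices under two
    augmented views. Matching rows are positive pairs; all other rows in the
    second view act as negatives.
    """
    # L2-normalize so dot products become cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau  # (n, n) temperature-scaled similarity matrix
    # Log-softmax over each row; the diagonal entries are the positives
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
# A slightly perturbed copy stands in for the second augmented view
loss_aligned = info_nce_loss(z, z + 0.01 * rng.normal(size=z.shape))
# Unrelated embeddings give a much harder contrastive problem
loss_random = info_nce_loss(z, rng.normal(size=(8, 16)))
print(loss_aligned, loss_random)
```

Under this toy setup, the loss for well-aligned views is much lower than for unrelated embeddings, which is the signal a two-view contrastive objective optimizes.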

Cite

Text

Madhu and Chepuri. "TopoSRL: Topology Preserving Self-Supervised Simplicial Representation Learning." Neural Information Processing Systems, 2023.

Markdown

[Madhu and Chepuri. "TopoSRL: Topology Preserving Self-Supervised Simplicial Representation Learning." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/madhu2023neurips-toposrl/)

BibTeX

@inproceedings{madhu2023neurips-toposrl,
  title     = {{TopoSRL: Topology Preserving Self-Supervised Simplicial Representation Learning}},
  author    = {Madhu, Hiren and Chepuri, Sundeep Prabhakar},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/madhu2023neurips-toposrl/}
}