Graph Geometry-Preserving Autoencoders

Abstract

When using an autoencoder to learn the low-dimensional manifold of high-dimensional data, it is crucial to find latent representations that preserve the geometry of the data manifold. However, most existing studies assume a Euclidean structure on the high-dimensional data space, which is arbitrary and often fails to reflect the underlying semantic or domain-specific attributes of the data. In this paper, we propose a novel autoencoder regularization framework based on the premise that the geometry of the data manifold can often be better captured by a well-designed similarity graph associated with the data points. Given such a graph, we employ a Riemannian geometric distortion measure as a regularizer to preserve the geometry derived from the graph Laplacian, in a form that scales to larger autoencoder training. Through extensive experiments, we show that our method outperforms existing state-of-the-art geometry-preserving and graph-based autoencoders at learning accurate latent structures that preserve the graph geometry, and is particularly effective for learning dynamics in the latent space. Code is available at https://github.com/JungbinLim/GGAE-public.
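To make the idea concrete, the following is a minimal illustrative sketch (not the authors' implementation) of the graph-side ingredients: building a kNN similarity graph over the data, forming its graph Laplacian, and using a simple Laplacian smoothness penalty on the latent codes as a stand-in regularizer. The paper's actual regularizer is a Riemannian geometric distortion measure; the Laplacian eigenmaps-style penalty below is only a familiar proxy for "nearby points on the graph should stay nearby in latent space". All function names and parameters here are hypothetical.

```python
import numpy as np

def knn_similarity_graph(X, k=5, sigma=1.0):
    """Symmetric kNN similarity graph with Gaussian kernel weights.

    X: (n, d) data matrix. Returns an (n, n) weight matrix W.
    """
    # Pairwise squared Euclidean distances.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Keep only each point's k nearest neighbours, then symmetrize
    # (an edge survives if either endpoint selects the other).
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]
    mask = np.zeros_like(W, dtype=bool)
    mask[np.arange(len(X))[:, None], idx] = True
    return np.where(mask | mask.T, W, 0.0)

def graph_laplacian(W):
    """Unnormalized graph Laplacian L = D - W."""
    return np.diag(W.sum(axis=1)) - W

def laplacian_regularizer(Z, L):
    """Smoothness penalty tr(Z^T L Z) on latent codes Z.

    Equals 0.5 * sum_ij W_ij ||z_i - z_j||^2, so it penalizes latent
    codes that place graph-neighbouring points far apart.
    """
    return np.trace(Z.T @ L @ Z)
```

In an actual autoencoder, this penalty (or, in the paper, the Riemannian distortion measure) would be added to the reconstruction loss, with `Z` the encoder outputs for a minibatch and the graph built once from the full dataset or approximated per batch.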

Cite

Text

Lim et al. "Graph Geometry-Preserving Autoencoders." International Conference on Machine Learning, 2024.

BibTeX

@inproceedings{lim2024icml-graph,
  title     = {{Graph Geometry-Preserving Autoencoders}},
  author    = {Lim, Jungbin and Kim, Jihwan and Lee, Yonghyeon and Jang, Cheongjae and Park, Frank C.},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {29795-29815},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/lim2024icml-graph/}
}