Connecting Neural Models Latent Geometries with Relative Geodesic Representations

Abstract

Neural models learn representations of high-dimensional data that lie on low-dimensional manifolds. Multiple factors, including stochasticity in the training process, can induce different representations, even when the same task is learned on the same data. However, when a latent structure is shared between different representational spaces, it has been shown that a transformation between them can be modeled. In this work, we show how, by leveraging the differential-geometric structure of the latent spaces of neural models, the transformations between distinct latent spaces can be captured precisely. We validate our method experimentally on autoencoder models and real pretrained foundation vision models across diverse architectures, initializations, and tasks.
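The abstract's core idea, describing each latent point by its distances to a set of shared anchor points so that independently trained spaces become comparable, can be illustrated with a minimal sketch. The paper builds these representations from geodesic distances on the latent manifold; the snippet below is only an illustrative stand-in that uses Euclidean distances, with the function name `relative_representation`, the anchor selection, and the synthetic data all being assumptions, not the paper's implementation.

```python
import numpy as np

def relative_representation(z, anchors):
    """Represent each latent point by its distances to a set of anchors.

    Note: the paper uses geodesic distances on the latent manifold;
    straight-line (Euclidean) distance is used here only as a simple
    illustrative stand-in.
    """
    # z: (n, d) latent codes; anchors: (k, d) anchor latent codes
    diffs = z[:, None, :] - anchors[None, :, :]   # shape (n, k, d)
    return np.linalg.norm(diffs, axis=-1)         # shape (n, k)

# Two synthetic "latent spaces" of the same data, related by an
# orthogonal transform (an isometric re-embedding):
rng = np.random.default_rng(0)
z1 = rng.normal(size=(100, 8))
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))
z2 = z1 @ Q

# Use the same 10 samples as anchors in both spaces.
anchors_idx = np.arange(10)
r1 = relative_representation(z1, z1[anchors_idx])
r2 = relative_representation(z2, z2[anchors_idx])

# Distance-based relative representations are invariant to isometries,
# so the two spaces coincide in the shared representation:
print(np.allclose(r1, r2))  # True
```

Because distances to anchors are unchanged by rotations, reflections, and translations of the latent space, the two models' representations agree in this shared coordinate system even though their raw latent codes differ.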

Cite

Text

Yu et al. "Connecting Neural Models Latent Geometries with Relative Geodesic Representations." NeurIPS 2024 Workshops: UniReps, 2024.

Markdown

[Yu et al. "Connecting Neural Models Latent Geometries with Relative Geodesic Representations." NeurIPS 2024 Workshops: UniReps, 2024.](https://mlanthology.org/neuripsw/2024/yu2024neuripsw-connecting-a/)

BibTeX

@inproceedings{yu2024neuripsw-connecting-a,
  title     = {{Connecting Neural Models Latent Geometries with Relative Geodesic Representations}},
  author    = {Yu, Hanlin and Inal, Berfin and Fumero, Marco},
  booktitle = {NeurIPS 2024 Workshops: UniReps},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/yu2024neuripsw-connecting-a/}
}