Topological Autoencoders
Abstract
We propose a novel approach for preserving topological structures of the input space in latent representations of autoencoders. Using persistent homology, a technique from topological data analysis, we calculate topological signatures of both the input and latent space to derive a topological loss term. Under weak theoretical assumptions, we construct this loss in a differentiable manner, such that the encoding learns to retain multi-scale connectivity information. We show that our approach is theoretically well-founded and that it exhibits favourable latent representations on a synthetic manifold as well as on real-world image data sets, while preserving low reconstruction errors.
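A minimal sketch of how such a topological loss term could look for 0-dimensional persistent homology, assuming (as in the Vietoris-Rips setting) that the 0-dimensional persistence pairing of a mini-batch coincides with the edges of its minimum spanning tree; the NumPy/SciPy implementation and function names below are illustrative assumptions, not the authors' reference code. In practice, the term would be computed per mini-batch in a differentiable framework and added to the autoencoder's reconstruction loss with a weighting factor.

# Illustrative sketch of a 0-dimensional topological loss (not the reference code).
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def persistence_pairing(distances):
    """Return the edges of the minimum spanning tree of a distance matrix.

    For a Vietoris-Rips filtration, the 0-dimensional persistence pairs
    correspond to the MST edges of the point cloud.
    """
    mst = minimum_spanning_tree(distances).tocoo()
    return np.stack([mst.row, mst.col], axis=1)

def topological_loss(x, z):
    """Compare distances selected by the pairings across input and latent space."""
    ax = squareform(pdist(x))  # pairwise distances in input space
    az = squareform(pdist(z))  # pairwise distances in latent space
    pi_x = persistence_pairing(ax)
    pi_z = persistence_pairing(az)
    # Distances picked out by each space's pairing, matched in the other space.
    loss_xz = np.sum((ax[pi_x[:, 0], pi_x[:, 1]] - az[pi_x[:, 0], pi_x[:, 1]]) ** 2)
    loss_zx = np.sum((az[pi_z[:, 0], pi_z[:, 1]] - ax[pi_z[:, 0], pi_z[:, 1]]) ** 2)
    return 0.5 * (loss_xz + loss_zx)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    batch = rng.normal(size=(32, 64))   # mini-batch in input space
    latent = rng.normal(size=(32, 2))   # corresponding latent codes
    print(topological_loss(batch, latent))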
Cite

Text:
Moor et al. "Topological Autoencoders." International Conference on Machine Learning, 2020.

Markdown:
[Moor et al. "Topological Autoencoders." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/moor2020icml-topological/)

BibTeX:
@inproceedings{moor2020icml-topological,
title = {{Topological Autoencoders}},
author = {Moor, Michael and Horn, Max and Rieck, Bastian and Borgwardt, Karsten},
booktitle = {International Conference on Machine Learning},
year = {2020},
pages = {7045--7054},
volume = {119},
url = {https://mlanthology.org/icml/2020/moor2020icml-topological/}
}