Self-Supervised Representation Learning on Manifolds
Abstract
We explore the use of a topological manifold, represented as a collection of charts, as the target space of neural-network-based representation learning tasks. This is achieved by a simple adjustment to the output of the encoder's network architecture plus the addition of a maximum mean discrepancy (MMD) based loss function for regularization. Most representation learning algorithms are easily adaptable to our framework, and we demonstrate its effectiveness by adjusting SimCLR to have a manifold encoding space. Our experiments show a substantial performance boost over the baseline for low-dimensional encodings. Code for reproducing experiments is provided at https://github.com/ekorman/neurve.
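As a rough illustration of the MMD regularizer mentioned in the abstract (a minimal sketch, not the paper's implementation; the RBF kernel choice, bandwidth, and all names here are illustrative assumptions), the following computes a squared maximum mean discrepancy between a batch of chart coordinates and samples from a uniform prior on the unit square:

```python
import numpy as np

def rbf_kernel(x, y, bandwidth=1.0):
    # Pairwise RBF (Gaussian) kernel between rows of x and rows of y.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def mmd2(x, y, bandwidth=1.0):
    # Biased (V-statistic) estimate of the squared MMD between the
    # empirical distributions of x and y.
    k_xx = rbf_kernel(x, x, bandwidth).mean()
    k_yy = rbf_kernel(y, y, bandwidth).mean()
    k_xy = rbf_kernel(x, y, bandwidth).mean()
    return k_xx + k_yy - 2.0 * k_xy

rng = np.random.default_rng(0)
codes = rng.uniform(size=(256, 2))      # stand-in for chart coordinates
prior = rng.uniform(size=(256, 2))      # samples from the uniform prior
collapsed = rng.uniform(size=(256, 2)) * 0.1  # codes bunched in a corner

# Well-spread codes should be closer (in MMD) to the prior than collapsed ones.
assert mmd2(codes, prior) < mmd2(collapsed, prior)
```

Minimizing such a term encourages the chart coordinates to fill the chart's domain rather than collapse, which is the regularizing role the abstract ascribes to the MMD loss.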
Cite
Text
Korman. "Self-Supervised Representation Learning on Manifolds." ICLR 2021 Workshops: GTRL, 2021.

Markdown

[Korman. "Self-Supervised Representation Learning on Manifolds." ICLR 2021 Workshops: GTRL, 2021.](https://mlanthology.org/iclrw/2021/korman2021iclrw-selfsupervised/)

BibTeX
@inproceedings{korman2021iclrw-selfsupervised,
  title = {{Self-Supervised Representation Learning on Manifolds}},
  author = {Korman, Eric O},
  booktitle = {ICLR 2021 Workshops: GTRL},
  year = {2021},
  url = {https://mlanthology.org/iclrw/2021/korman2021iclrw-selfsupervised/}
}