Staying in Shape: Learning Invariant Shape Representations Using Contrastive Learning

Abstract

Creating representations of shapes that are invariant to isometric or almost-isometric transformations has long been an area of interest in shape analysis, since enforcing invariance allows the learning of more effective and robust shape representations. Most existing invariant shape representations are handcrafted, and previous work on learning shape representations does not focus on producing invariant representations. To learn invariant shape representations in an unsupervised way, we use contrastive learning, which produces discriminative representations by learning invariance to user-specified data augmentations. To produce representations that are specifically isometry- and almost-isometry-invariant, we propose new data augmentations that randomly sample these transformations. We show experimentally that our method outperforms previous unsupervised learning approaches in both effectiveness and robustness.
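
As a rough illustration of the kind of augmentation the abstract describes, the sketch below samples a random isometry of R^3 (a Haar-uniform orthogonal matrix plus a translation) and adds small per-point jitter so the transformation is only approximately distance-preserving. The function names, noise scales, and the two-view usage at the end are illustrative assumptions for a SimCLR-style contrastive setup, not the authors' implementation.

```python
import numpy as np

def random_isometry(dim=3, translation_scale=0.1, rng=None):
    """Sample a random isometry of R^dim: a Haar-uniform orthogonal matrix
    (rotation or reflection) plus a random translation."""
    rng = np.random.default_rng(rng)
    # QR of a Gaussian matrix, with column signs fixed, gives a Haar-uniform
    # orthogonal matrix.
    q, r = np.linalg.qr(rng.normal(size=(dim, dim)))
    q = q * np.sign(np.diag(r))
    t = rng.normal(scale=translation_scale, size=dim)
    return q, t

def augment(points, jitter_scale=0.01, rng=None):
    """Produce an (almost-)isometric view of an (N, dim) point cloud: apply a
    random rigid transform, then add small per-point jitter that breaks exact
    isometry."""
    rng = np.random.default_rng(rng)
    q, t = random_isometry(points.shape[1], rng=rng)
    jitter = rng.normal(scale=jitter_scale, size=points.shape)
    return points @ q.T + t + jitter

# Two independently augmented views of the same shape would serve as the
# positive pair in a standard contrastive objective (e.g. an NT-Xent loss).
shape = np.random.default_rng(0).random((1024, 3))
view_a, view_b = augment(shape), augment(shape)
```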

Cite

Text

Gu and Yeung. "Staying in Shape: Learning Invariant Shape Representations Using Contrastive Learning." Uncertainty in Artificial Intelligence, 2021.

Markdown

[Gu and Yeung. "Staying in Shape: Learning Invariant Shape Representations Using Contrastive Learning." Uncertainty in Artificial Intelligence, 2021.](https://mlanthology.org/uai/2021/gu2021uai-staying/)

BibTeX

@inproceedings{gu2021uai-staying,
  title     = {{Staying in Shape: Learning Invariant Shape Representations Using Contrastive Learning}},
  author    = {Gu, Jeffrey and Yeung, Serena},
  booktitle = {Uncertainty in Artificial Intelligence},
  year      = {2021},
  pages     = {1852--1862},
  volume    = {161},
  url       = {https://mlanthology.org/uai/2021/gu2021uai-staying/}
}