Canonical Capsules: Self-Supervised Capsules in Canonical Pose
Abstract
We propose a self-supervised capsule architecture for 3D point clouds. We compute capsule decompositions of objects through permutation-equivariant attention, and self-supervise the process by training with pairs of randomly rotated objects. Our key idea is to aggregate the attention masks into semantic keypoints, and use these to supervise a decomposition that satisfies the capsule invariance/equivariance properties. This not only enables the training of a semantically consistent decomposition, but also allows us to learn a canonicalization operation that enables object-centric reasoning. To train our neural network we require neither classification labels nor manually-aligned training datasets. Yet, by learning an object-centric representation in a self-supervised manner, our method outperforms the state-of-the-art on 3D point cloud reconstruction, canonicalization, and unsupervised classification.
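The abstract's core operation, aggregating per-point attention masks into one semantic keypoint per capsule, can be sketched as an attention-weighted mean. This is a minimal illustration, not the paper's implementation: the shapes, the `capsule_keypoints` helper, and the reuse of fixed attention logits are all assumptions (in the actual method the masks are produced by a trained, permutation-equivariant network and supervised to be rotation-invariant).

```python
import numpy as np

def capsule_keypoints(points, attention_logits):
    """Aggregate per-point attention masks into one keypoint per capsule.

    points: (N, 3) point cloud; attention_logits: (K, N) per-capsule scores.
    (Hypothetical shapes, for illustration only.)
    """
    # Softmax over the point axis so each capsule's mask sums to one.
    # Reordering the input points reorders the masks identically,
    # which is the permutation equivariance the abstract refers to.
    exp = np.exp(attention_logits - attention_logits.max(axis=1, keepdims=True))
    masks = exp / exp.sum(axis=1, keepdims=True)   # (K, N)
    # Each keypoint is the attention-weighted mean of the points.
    return masks @ points                          # (K, 3)

# With the masks held fixed (as rotation-invariant masks would be),
# rotating the input rotates the keypoints by the same rotation:
rng = np.random.default_rng(0)
pts = rng.standard_normal((100, 3))
logits = rng.standard_normal((8, 100))
R, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random orthogonal matrix
kp = capsule_keypoints(pts, logits)
kp_rot = capsule_keypoints(pts @ R.T, logits)
assert np.allclose(kp_rot, kp @ R.T)               # equivariance holds
```

Because the keypoints are linear in the points once the masks are fixed, equivariance to rotation follows immediately; the training objective's job is to make the masks themselves consistent across the randomly rotated input pairs.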
Cite
Text

Sun et al. "Canonical Capsules: Self-Supervised Capsules in Canonical Pose." Neural Information Processing Systems, 2021.

Markdown

[Sun et al. "Canonical Capsules: Self-Supervised Capsules in Canonical Pose." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/sun2021neurips-canonical/)

BibTeX
@inproceedings{sun2021neurips-canonical,
title = {{Canonical Capsules: Self-Supervised Capsules in Canonical Pose}},
author = {Sun, Weiwei and Tagliasacchi, Andrea and Deng, Boyang and Sabour, Sara and Yazdani, Soroosh and Hinton, Geoffrey E. and Yi, Kwang Moo},
booktitle = {Neural Information Processing Systems},
year = {2021},
url = {https://mlanthology.org/neurips/2021/sun2021neurips-canonical/}
}