Self-Supervised Category-Level Articulated Object Pose Estimation with Part-Level SE(3) Equivariance
Abstract
Category-level articulated object pose estimation aims to estimate a hierarchy of articulation-aware object poses for an unseen articulated object from a known category. To reduce the heavy annotation burden of supervised learning methods, we present a novel self-supervised strategy that solves this problem without any human labels. Our key idea is to factorize canonical shapes and articulated object poses from input articulated shapes through part-level equivariant shape analysis. Specifically, we first introduce the concept of part-level SE(3) equivariance and devise a network to learn features with this property. Then, through a carefully designed fine-grained pose-shape disentanglement strategy, we expect canonical spaces that support pose estimation to emerge automatically. We can then predict articulated object poses as per-part rigid transformations describing how each part maps from its canonical part space to the camera space. Extensive experiments demonstrate the effectiveness of our method on both complete and partial point clouds from synthetic and real articulated object datasets.
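To make the pose factorization concrete, below is a minimal NumPy sketch (not the authors' code) of the final step the abstract describes: each part's canonical point cloud is mapped to camera space by its own SE(3) transform (R_i, t_i), and a part-level SE(3)-equivariant feature must transform in the same way. All function names here (random_se3, articulate, the centroid feature) are illustrative assumptions, not the paper's API.

# A minimal sketch of the per-part rigid-transform view of articulated pose:
# every part carries its own (R_i, t_i) from its canonical part space to
# camera space. Names are illustrative, not from the paper.
import numpy as np

def random_se3(rng):
    """Sample a random rotation (via QR) and a random translation."""
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(q) < 0:      # ensure a proper rotation, det = +1
        q[:, 0] = -q[:, 0]
    return q, rng.normal(size=3)

def articulate(canonical_parts, part_poses):
    """Map each canonical part cloud (N_i x 3) to camera space with its own (R, t)."""
    return [pts @ R.T + t for pts, (R, t) in zip(canonical_parts, part_poses)]

rng = np.random.default_rng(0)
parts = [rng.normal(size=(128, 3)) for _ in range(2)]   # two toy "parts"
poses = [random_se3(rng) for _ in parts]
camera_space = articulate(parts, poses)

# Part-level SE(3) equivariance, checked on a toy equivariant feature
# (the per-part centroid): transforming a part's points by (R, t)
# transforms its feature by the same (R, t).
centroid = lambda pts: pts.mean(axis=0)
for pts, (R, t) in zip(parts, poses):
    assert np.allclose(centroid(pts @ R.T + t), centroid(pts) @ R.T + t)

The learned network in the paper plays the role of this toy centroid feature: because its per-part features commute with per-part SE(3) actions, canonical part spaces and per-part transforms can be disentangled without pose labels.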
Cite
Text
Liu et al. "Self-Supervised Category-Level Articulated Object Pose Estimation with Part-Level SE(3) Equivariance." International Conference on Learning Representations, 2023.

Markdown

[Liu et al. "Self-Supervised Category-Level Articulated Object Pose Estimation with Part-Level SE(3) Equivariance." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/liu2023iclr-selfsupervised/)

BibTeX
@inproceedings{liu2023iclr-selfsupervised,
title = {{Self-Supervised Category-Level Articulated Object Pose Estimation with Part-Level SE(3) Equivariance}},
author = {Liu, Xueyi and Zhang, Ji and Hu, Ruizhen and Huang, Haibin and Wang, He and Yi, Li},
booktitle = {International Conference on Learning Representations},
year = {2023},
url = {https://mlanthology.org/iclr/2023/liu2023iclr-selfsupervised/}
}