UniTac2Pose: A Unified Approach Learned in Simulation for Category-Level Visuotactile In-Hand Pose Estimation
Abstract
Accurate estimation of the in-hand pose of an object based on its CAD model is crucial in both industrial applications and everyday tasks, ranging from positioning workpieces and assembling components to seamlessly inserting devices like USB connectors. While existing methods often rely on regression, feature matching, or registration techniques, achieving high precision and generalizing to unseen CAD models remain significant challenges. In this paper, we propose a novel three-stage framework for in-hand pose estimation. The first stage samples and pre-ranks pose candidates, the second stage iteratively refines these candidates, and the final stage post-ranks the refined candidates to identify the most likely poses. All three stages are governed by a unified energy-based diffusion model trained solely on simulated data. This energy model simultaneously generates gradients to refine pose estimates and produces an energy scalar that quantifies the quality of each estimate. Additionally, inspired by techniques from computer vision, we incorporate a render-compare architecture within the energy-based score network, which significantly enhances sim-to-real performance, as demonstrated by our ablation studies. Extensive experimental evaluations show that our method outperforms conventional baselines based on regression, matching, and registration techniques, while also exhibiting strong generalization to previously unseen CAD models. Moreover, our approach integrates tactile object pose estimation, pose tracking, and uncertainty estimation into a unified system, enabling robust performance across a variety of real-world conditions.
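The three-stage pipeline can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: the learned energy network is replaced by a hypothetical quadratic `energy` function over a 3-vector (standing in for a full pose), and its analytic gradient stands in for backpropagation through the score network. The stage structure — sample and pre-rank, gradient-based refinement, post-rank with the energy doubling as an uncertainty score — mirrors the abstract's description.

```python
import numpy as np

def energy(pose, target):
    # Toy stand-in for the learned energy network: lower energy = better fit.
    return float(np.sum((pose - target) ** 2))

def energy_grad(pose, target):
    # Analytic gradient of the toy energy; the paper's model would obtain
    # this by differentiating the energy-based score network.
    return 2.0 * (pose - target)

def estimate_pose(target, n_candidates=64, n_keep=8, n_steps=50, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    # Stage 1: sample pose candidates and pre-rank them by energy.
    cands = rng.uniform(-1.0, 1.0, size=(n_candidates, 3))
    order = np.argsort([energy(c, target) for c in cands])
    cands = cands[order][:n_keep]
    # Stage 2: iteratively refine each kept candidate by gradient descent
    # on the energy landscape.
    for _ in range(n_steps):
        grads = np.array([energy_grad(c, target) for c in cands])
        cands = cands - lr * grads
    # Stage 3: post-rank refined candidates; the minimum energy also serves
    # as a scalar uncertainty estimate for the returned pose.
    scores = np.array([energy(c, target) for c in cands])
    best = cands[int(np.argmin(scores))]
    return best, float(scores.min())

best, score = estimate_pose(np.array([0.2, -0.3, 0.5]))
```

With the quadratic toy energy, each refinement step shrinks the pose error by a constant factor, so the best candidate converges to the target and its energy approaches zero; a real energy landscape would be multimodal, which is why multiple candidates are kept and re-ranked.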
Cite

Wu et al. "UniTac2Pose: A Unified Approach Learned in Simulation for Category-Level Visuotactile In-Hand Pose Estimation." Proceedings of The 9th Conference on Robot Learning, 2025. https://mlanthology.org/corl/2025/wu2025corl-unitac2pose/

BibTeX
@inproceedings{wu2025corl-unitac2pose,
title = {{UniTac2Pose: A Unified Approach Learned in Simulation for Category-Level Visuotactile In-Hand Pose Estimation}},
author = {Wu, Mingdong and Yang, Long and Liu, Jin and Huang, Weiyao and Wu, Lehong and Chen, Zelin and Ma, Daolin and Dong, Hao},
booktitle = {Proceedings of The 9th Conference on Robot Learning},
year = {2025},
pages = {4367-4384},
volume = {305},
url = {https://mlanthology.org/corl/2025/wu2025corl-unitac2pose/}
}