Any6D: Model-Free 6D Pose Estimation of Novel Objects
Abstract
We introduce Any6D, a model-free framework for 6D object pose estimation that requires only a single RGB-D anchor image to estimate both the 6D pose and size of unknown objects in novel scenes. Unlike existing methods that rely on textured 3D models or multiple viewpoints, Any6D leverages a joint object alignment process to enhance 2D-3D alignment and metric scale estimation for improved pose accuracy. Our approach integrates a render-and-compare strategy to generate and refine pose hypotheses, enabling robust performance in scenarios with occlusions, non-overlapping views, diverse lighting conditions, and large cross-environment variations. We evaluate our method on five challenging datasets: REAL275, Toyota-Light, HO3D, YCBINEOAT, and LM-O, demonstrating its effectiveness in significantly outperforming state-of-the-art methods for novel object pose estimation. Project page: https://taeyeop.com/any6d
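The render-and-compare idea from the abstract can be illustrated with a toy sketch: render a depth image under each candidate pose, score it against the observed depth, and keep the best hypothesis. The `render_depth` stub and the pose encoding below are illustrative stand-ins, not the paper's actual renderer or refinement procedure.

```python
import numpy as np

def render_depth(pose, size=8):
    # Toy stand-in for a real renderer: encodes the pose translation
    # into a depth map. A real pipeline would rasterize the object
    # mesh under the full 6D pose.
    base = np.full((size, size), pose[2])          # depth set by z-translation
    return base + 0.01 * pose[0] + 0.02 * pose[1]  # small offsets from x, y

def render_and_compare(observed_depth, pose_hypotheses):
    """Score each pose hypothesis by mean depth residual; return the best."""
    scores = []
    for pose in pose_hypotheses:
        rendered = render_depth(pose, observed_depth.shape[0])
        scores.append(np.mean(np.abs(rendered - observed_depth)))  # L1 error
    best = int(np.argmin(scores))
    return pose_hypotheses[best], scores[best]

# Observed depth generated from a "true" pose; the matching hypothesis wins.
true_pose = (0.1, -0.2, 0.5)
obs = render_depth(true_pose)
hypotheses = [(0.0, 0.0, 0.4), true_pose, (0.3, 0.1, 0.6)]
best_pose, err = render_and_compare(obs, hypotheses)
```

In practice the comparison would use richer cues (RGB, masks, learned features) and the hypothesis set would be refined iteratively rather than scored once.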
Cite
Text
Lee et al. "Any6D: Model-Free 6D Pose Estimation of Novel Objects." Conference on Computer Vision and Pattern Recognition, 2025. doi:10.1109/CVPR52734.2025.01086
Markdown
[Lee et al. "Any6D: Model-Free 6D Pose Estimation of Novel Objects." Conference on Computer Vision and Pattern Recognition, 2025.](https://mlanthology.org/cvpr/2025/lee2025cvpr-any6d/) doi:10.1109/CVPR52734.2025.01086
BibTeX
@inproceedings{lee2025cvpr-any6d,
title = {{Any6D: Model-Free 6D Pose Estimation of Novel Objects}},
author = {Lee, Taeyeop and Wen, Bowen and Kang, Minjun and Kang, Gyuree and Kweon, In So and Yoon, Kuk-Jin},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2025},
pages = {11633--11643},
doi = {10.1109/CVPR52734.2025.01086},
url = {https://mlanthology.org/cvpr/2025/lee2025cvpr-any6d/}
}