PACE: Pose Annotations in Cluttered Environments
Abstract
We introduce PACE (Pose Annotations in Cluttered Environments), a large-scale benchmark designed to advance the development and evaluation of pose estimation methods in cluttered scenarios. PACE provides a large-scale real-world benchmark for both instance-level and category-level settings. The benchmark consists of 55K frames with 258K annotations across 300 videos, covering 238 objects from 43 categories and featuring a mix of rigid and articulated items in cluttered scenes. To annotate the real-world data efficiently, we develop an innovative annotation system with a calibrated 3-camera setup. Additionally, we offer PACE-Sim, which contains 100K photo-realistic simulated frames with 2.4M annotations across 931 objects. We evaluate state-of-the-art algorithms on PACE along two tracks, pose estimation and object pose tracking, revealing the benchmark's challenges and research opportunities. Our benchmark code and data are available at https://github.com/qq456cvb/PACE.
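For context on the pose-estimation track, the sketch below shows one common way to score a predicted 6D pose against ground truth: geodesic rotation error plus Euclidean translation error. This is a generic illustration in Python/NumPy under our own assumptions (function names, degree/centimeter units), not the exact evaluation protocol used by PACE.

import numpy as np

def rotation_error_deg(R_pred: np.ndarray, R_gt: np.ndarray) -> float:
    # Geodesic distance between two 3x3 rotation matrices, in degrees.
    # trace(R_pred @ R_gt.T) = 1 + 2*cos(theta)
    cos_theta = (np.trace(R_pred @ R_gt.T) - 1.0) / 2.0
    cos_theta = np.clip(cos_theta, -1.0, 1.0)  # guard against numerical drift
    return float(np.degrees(np.arccos(cos_theta)))

def translation_error_cm(t_pred: np.ndarray, t_gt: np.ndarray) -> float:
    # Euclidean distance between translations, assuming inputs in meters.
    return float(np.linalg.norm(t_pred - t_gt) * 100.0)

# Example: a prediction rotated 10 degrees about the z-axis and offset by 2 cm.
theta = np.radians(10.0)
R_gt = np.eye(3)
R_pred = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_gt = np.zeros(3)
t_pred = np.array([0.02, 0.0, 0.0])
print(rotation_error_deg(R_pred, R_gt))    # ~10.0
print(translation_error_cm(t_pred, t_gt))  # ~2.0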
Cite
Text
You et al. "PACE: Pose Annotations in Cluttered Environments." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-72983-6_27
Markdown
[You et al. "PACE: Pose Annotations in Cluttered Environments." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/you2024eccv-pace/) doi:10.1007/978-3-031-72983-6_27
BibTeX
@inproceedings{you2024eccv-pace,
  title     = {{PACE: Pose Annotations in Cluttered Environments}},
  author    = {You, Yang and Xiong, Kai and Yang, Zhening and Huang, Zhengxiang and Zhou, Junwei and Shi, Ruoxi and Fang, Zhou and Harley, Adam and Guibas, Leonidas and Lu, Cewu},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2024},
  doi       = {10.1007/978-3-031-72983-6_27},
  url       = {https://mlanthology.org/eccv/2024/you2024eccv-pace/}
}