DexYCB: A Benchmark for Capturing Hand Grasping of Objects
Abstract
We introduce DexYCB, a new dataset for capturing hand grasping of objects. We first compare DexYCB with a related one through cross-dataset evaluation. We then present a thorough benchmark of state-of-the-art approaches on three relevant tasks: 2D object and keypoint detection, 6D object pose estimation, and 3D hand pose estimation. Finally, we evaluate a new robotics-relevant task: generating safe robot grasps in human-to-robot object handover.
Cite
Text
Chao et al. "DexYCB: A Benchmark for Capturing Hand Grasping of Objects." Conference on Computer Vision and Pattern Recognition, 2021. doi:10.1109/CVPR46437.2021.00893
Markdown
[Chao et al. "DexYCB: A Benchmark for Capturing Hand Grasping of Objects." Conference on Computer Vision and Pattern Recognition, 2021.](https://mlanthology.org/cvpr/2021/chao2021cvpr-dexycb/) doi:10.1109/CVPR46437.2021.00893
BibTeX
@inproceedings{chao2021cvpr-dexycb,
title = {{DexYCB: A Benchmark for Capturing Hand Grasping of Objects}},
author = {Chao, Yu-Wei and Yang, Wei and Xiang, Yu and Molchanov, Pavlo and Handa, Ankur and Tremblay, Jonathan and Narang, Yashraj S. and Van Wyk, Karl and Iqbal, Umar and Birchfield, Stan and Kautz, Jan and Fox, Dieter},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2021},
pages = {9044--9053},
doi = {10.1109/CVPR46437.2021.00893},
url = {https://mlanthology.org/cvpr/2021/chao2021cvpr-dexycb/}
}