Hand2Any: Hand-to-Any Motion Mapping with Few-Shot User Adaptation for Avatar Manipulation
Abstract
We explore user-agnostic and user-specific mapping techniques for manipulating avatars using hand gestures in virtual reality (VR). User-agnostic mapping allows users to control various avatars based on common user agreements, while user-specific mapping adapts to individual preferences using few-shot adaptation. Both approaches use supervised learning with paired datasets of motion data. Our evaluation, including quantitative assessments and user studies, shows that both techniques offer finer control, require less physical effort, and provide higher user satisfaction compared to existing methods. However, only a few users preferred user-specific mapping, indicating that the benefits of personalized mapping may vary. Finally, we demonstrate our methods’ ability to manipulate avatars with variable joint structures, surpassing current methods.
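The abstract describes two stages: a user-agnostic mapping learned from pooled paired motion data, then user-specific few-shot adaptation. The paper's actual model is not specified here, so the following is only a rough linear-regression sketch of that two-stage idea; every dimension, variable name, and the ridge-toward-prior adaptation scheme are illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 15 hand-pose features -> 8 avatar joint angles.
D_HAND, D_AVATAR = 15, 8

def fit_mapping(X, Y, W_prior=None, lam=1.0):
    """Ridge-regularized linear map from hand poses X to avatar poses Y.
    If W_prior is given (the user-agnostic weights), shrink toward it
    instead of toward zero -- a simple stand-in for few-shot adaptation."""
    if W_prior is None:
        W_prior = np.zeros((X.shape[1], Y.shape[1]))
    # Closed form: W = (X^T X + lam*I)^-1 (X^T Y + lam*W_prior)
    A = X.T @ X + lam * np.eye(X.shape[1])
    B = X.T @ Y + lam * W_prior
    return np.linalg.solve(A, B)

# Stage 1 (user-agnostic): many paired samples pooled across users.
W_true = rng.normal(size=(D_HAND, D_AVATAR))   # synthetic ground truth
X_pool = rng.normal(size=(500, D_HAND))
Y_pool = X_pool @ W_true + 0.01 * rng.normal(size=(500, D_AVATAR))
W_generic = fit_mapping(X_pool, Y_pool)

# Stage 2 (user-specific): only a handful of pairs from one user.
X_few = rng.normal(size=(5, D_HAND))
Y_few = X_few @ W_true                         # this user's preferred pairs
W_user = fit_mapping(X_few, Y_few, W_prior=W_generic)

# Runtime: map a live hand pose to avatar joint angles.
avatar_pose = rng.normal(size=(1, D_HAND)) @ W_user
print(avatar_pose.shape)
```

With only five user-specific pairs, regularizing toward the generic weights keeps the adapted map close to the shared behavior while still bending toward the individual's preferences; the paper's neural approach presumably plays an analogous role at larger scale.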
Cite
Text
Shinohara et al. "Hand2Any: Hand-to-Any Motion Mapping with Few-Shot User Adaptation for Avatar Manipulation." European Conference on Computer Vision Workshops, 2024. doi:10.1007/978-3-031-92387-6_28
Markdown
[Shinohara et al. "Hand2Any: Hand-to-Any Motion Mapping with Few-Shot User Adaptation for Avatar Manipulation." European Conference on Computer Vision Workshops, 2024.](https://mlanthology.org/eccvw/2024/shinohara2024eccvw-hand2any/) doi:10.1007/978-3-031-92387-6_28
BibTeX
@inproceedings{shinohara2024eccvw-hand2any,
title = {{Hand2Any: Hand-to-Any Motion Mapping with Few-Shot User Adaptation for Avatar Manipulation}},
author = {Shinohara, Riku and Hashimoto, Atsushi and Kozuno, Tadashi and Yoshida, Shigeo and Hirao, Yutaro and Perusquía-Hernández, Monica and Uchiyama, Hideaki and Kiyokawa, Kiyoshi},
booktitle = {European Conference on Computer Vision Workshops},
year = {2024},
pages = {407--423},
doi = {10.1007/978-3-031-92387-6_28},
url = {https://mlanthology.org/eccvw/2024/shinohara2024eccvw-hand2any/}
}