MLPHand: Real Time Multi-View 3D Hand Reconstruction via MLP Modeling

Abstract

Multi-view hand reconstruction is a critical task for applications in virtual reality and human-computer interaction, but it remains a formidable challenge. Although existing multi-view hand reconstruction methods achieve remarkable accuracy, they typically come with an intensive computational burden that hinders real-time inference. To this end, we propose MLPHand, a novel method designed for real-time multi-view single-hand reconstruction. MLPHand consists of two primary modules: (1) a lightweight MLP-based Skeleton2Mesh model that efficiently recovers hand meshes from hand skeletons, and (2) a multi-view geometry feature fusion prediction module that enhances the Skeleton2Mesh model with detailed geometric information from multiple views. Experiments on three widely used datasets demonstrate that MLPHand reduces computational complexity by 90% while achieving reconstruction accuracy comparable to existing state-of-the-art baselines. The project is available at https://github.com/jackyyang9/MLPHand.
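For intuition, below is a minimal PyTorch sketch of the idea behind a Skeleton2Mesh module: a small MLP that regresses MANO-style mesh vertices (778 x 3) from a 21-joint 3D hand skeleton. The layer widths, depth, and dimensions here are illustrative assumptions, not the authors' exact architecture.

import torch
import torch.nn as nn

class Skeleton2MeshMLP(nn.Module):
    # Illustrative sketch only: a plain MLP mapping a 3D hand skeleton
    # to mesh vertices. Hidden size and depth are assumptions, not the
    # paper's actual Skeleton2Mesh design.
    def __init__(self, num_joints=21, num_vertices=778, hidden=256):
        super().__init__()
        self.num_vertices = num_vertices
        self.net = nn.Sequential(
            nn.Linear(num_joints * 3, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, num_vertices * 3),
        )

    def forward(self, joints):
        # joints: (B, num_joints, 3) 3D hand skeleton
        b = joints.shape[0]
        verts = self.net(joints.flatten(1))
        return verts.view(b, self.num_vertices, 3)  # (B, 778, 3) mesh

if __name__ == "__main__":
    model = Skeleton2MeshMLP()
    skeleton = torch.randn(4, 21, 3)  # batch of 4 hand skeletons
    mesh = model(skeleton)
    print(mesh.shape)  # torch.Size([4, 778, 3])

Because the mapping is a single feed-forward pass through small linear layers, such a model is far cheaper than volumetric or transformer-based multi-view pipelines, which is consistent with the paper's reported reduction in computational complexity.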

Cite

Text

Yang et al. "MLPHand: Real Time Multi-View 3D Hand Reconstruction via MLP Modeling." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-72904-1_24

Markdown

[Yang et al. "MLPHand: Real Time Multi-View 3D Hand Reconstruction via MLP Modeling." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/yang2024eccv-mlphand/) doi:10.1007/978-3-031-72904-1_24

BibTeX

@inproceedings{yang2024eccv-mlphand,
  title     = {{MLPHand: Real Time Multi-View 3D Hand Reconstruction via MLP Modeling}},
  author    = {Yang, Jian and Li, Jiakun and Li, Guoming and Wu, Huaiyu and Shen, Zhen and Fan, Zhaoxin},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2024},
  doi       = {10.1007/978-3-031-72904-1_24},
  url       = {https://mlanthology.org/eccv/2024/yang2024eccv-mlphand/}
}