MagicPony: Learning Articulated 3D Animals in the Wild

Abstract

We consider the problem of predicting the 3D shape, articulation, viewpoint, texture, and lighting of an articulated animal like a horse given a single test image as input. We present a new method, dubbed MagicPony, that learns this predictor purely from in-the-wild single-view images of the object category, with minimal assumptions about the topology of deformation. At its core is an implicit-explicit representation of articulated shape and appearance, combining the strengths of neural fields and meshes. To help the model understand an object's shape and pose, we distil the knowledge captured by an off-the-shelf self-supervised vision transformer and fuse it into the 3D model. To overcome local optima in viewpoint estimation, we further introduce a new viewpoint sampling scheme that comes at no additional training cost. MagicPony outperforms prior work on this challenging task and demonstrates excellent generalisation in reconstructing art, despite being trained only on real images. The code can be found on the project page at https://3dmagicpony.github.io/.
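To make the viewpoint sampling idea concrete, below is a minimal PyTorch sketch of how such a scheme might look. It assumes the network proposes K rotation hypotheses, each paired with a learned estimate of the reconstruction loss it would incur, and that only the sampled hypothesis is rendered per training step (hence no extra rendering cost). The function name, tensor shapes, and the score-head update in the trailing comment are illustrative assumptions, not the paper's actual API.

```python
import torch
import torch.nn.functional as F

def sample_viewpoint_hypothesis(rotation_params, expected_losses):
    """Pick one of K candidate rotations to render this iteration.

    rotation_params:  (K, 4) quaternion parameters, one per hypothesis
                      (hypothetical shape, for illustration only)
    expected_losses:  (K,) the network's current estimate of the
                      reconstruction loss each hypothesis would incur

    Sampling from a softmax over the negative expected losses explores
    alternative viewpoints early in training, while increasingly
    favouring the best hypothesis as the estimates become reliable.
    Only the chosen hypothesis is rendered, so each step still costs
    a single rendering pass.
    """
    probs = torch.softmax(-expected_losses, dim=0)
    k = torch.multinomial(probs, num_samples=1).item()
    quat = F.normalize(rotation_params[k], dim=0)  # unit quaternion
    return k, quat

# After rendering hypothesis k and measuring the true reconstruction
# loss, the loss-estimate head could be regressed towards it so future
# estimates improve (again, an assumed formulation):
#   score_loss = (expected_losses[k] - recon_loss.detach()) ** 2
```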

Cite

Text

Wu et al. "MagicPony: Learning Articulated 3D Animals in the Wild." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.00849

Markdown

[Wu et al. "MagicPony: Learning Articulated 3D Animals in the Wild." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/wu2023cvpr-magicpony/) doi:10.1109/CVPR52729.2023.00849

BibTeX

@inproceedings{wu2023cvpr-magicpony,
  title     = {{MagicPony: Learning Articulated 3D Animals in the Wild}},
  author    = {Wu, Shangzhe and Li, Ruining and Jakab, Tomas and Rupprecht, Christian and Vedaldi, Andrea},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2023},
  pages     = {8792--8802},
  doi       = {10.1109/CVPR52729.2023.00849},
  url       = {https://mlanthology.org/cvpr/2023/wu2023cvpr-magicpony/}
}