TRAM: Global Trajectory and Motion of 3D Humans from In-the-Wild Videos

Abstract

We propose TRAM, a two-stage method to reconstruct a human’s global trajectory and motion from in-the-wild videos. TRAM robustifies SLAM to recover the camera motion in the presence of dynamic humans and uses the scene background to derive the motion scale. Using the recovered camera as a metric-scale reference frame, we introduce a video transformer model (VIMO) to regress the kinematic body motion of a human. By composing the two motions, we achieve accurate recovery of 3D humans in the world space, reducing global motion errors by a large margin from prior work. https://yufu-wang.github.io/tram4d/
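The composition step described in the abstract amounts to a per-frame rigid-transform product: the camera-to-world pose from SLAM (at metric scale) maps the body's camera-frame root pose from VIMO into world coordinates. Below is a minimal sketch of that composition, assuming hypothetical variable names rather than the authors' released API.

```python
import numpy as np

def compose_global_motion(R_wc, t_wc, R_ch, t_ch):
    """Compose per-frame camera poses with camera-frame body poses.

    Hypothetical interface, not the paper's released code.
    R_wc: (T, 3, 3) camera-to-world rotations from SLAM
    t_wc: (T, 3)    camera positions in world coordinates (metric scale)
    R_ch: (T, 3, 3) human root orientations in camera coordinates (from VIMO)
    t_ch: (T, 3)    human root translations in camera coordinates (from VIMO)

    Returns (R_wh, t_wh): the human root pose in world coordinates.
    """
    # Rigid-transform composition per frame: T_wh = T_wc * T_ch
    R_wh = np.einsum('tij,tjk->tik', R_wc, R_ch)
    t_wh = np.einsum('tij,tj->ti', R_wc, t_ch) + t_wc
    return R_wh, t_wh

# Toy usage: a static camera at the origin leaves the body pose unchanged.
T = 4
R_wc = np.tile(np.eye(3), (T, 1, 1))
t_wc = np.zeros((T, 3))
R_ch = np.tile(np.eye(3), (T, 1, 1))
t_ch = np.linspace([0, 0, 2], [1, 0, 2], T)  # body moving in camera frame
R_wh, t_wh = compose_global_motion(R_wc, t_wc, R_ch, t_ch)
assert np.allclose(t_wh, t_ch)
```

Because the SLAM trajectory is recovered at metric scale from the scene background, this composition yields the global human trajectory in world units rather than up to an unknown scale.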

Cite

Text

Wang et al. "TRAM: Global Trajectory and Motion of 3D Humans from In-the-Wild Videos." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-73247-8_27

Markdown

[Wang et al. "TRAM: Global Trajectory and Motion of 3D Humans from In-the-Wild Videos." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/wang2024eccv-tram/) doi:10.1007/978-3-031-73247-8_27

BibTeX

@inproceedings{wang2024eccv-tram,
  title     = {{TRAM: Global Trajectory and Motion of 3D Humans from In-the-Wild Videos}},
  author    = {Wang, Yufu and Wang, Ziyun and Liu, Lingjie and Daniilidis, Kostas},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2024},
  doi       = {10.1007/978-3-031-73247-8_27},
  url       = {https://mlanthology.org/eccv/2024/wang2024eccv-tram/}
}