Pose-Transformed Equivariant Network for 3D Point Trajectory Prediction

Abstract

Predicting 3D point trajectories is a fundamental learning task that should commonly be equivariant under Euclidean transformations, e.g., SE(3). Existing equivariant models are commonly based on group equivariant convolution, equivariant message passing, vector neurons, frame averaging, etc. In this paper, we propose a novel pose-transformed equivariant network, in which the points are first uniquely normalized and then transformed by learned pose transformations, upon which the points after motion are predicted and aggregated. Under each transformed pose, we design a point position predictor consisting of multiple Pose-Transformed Points Prediction blocks, in which global and local motions are estimated and aggregated. This framework is provably equivariant to SE(3) transformations of the 3D points. We evaluate the pose-transformed equivariant network on extensive datasets, including human motion capture, molecular dynamics modeling, and dynamics simulation. Extensive experimental comparisons demonstrate state-of-the-art performance compared with existing equivariant networks for 3D point trajectory prediction.
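
The following is a minimal NumPy sketch, not the authors' implementation, of the general normalize-predict-map-back pattern the abstract describes: points are brought into a pose derived from the input, a predictor runs in that frame, and the result is transformed back, which makes the whole pipeline SE(3)-equivariant. The PCA-based frame, the sign-fixing statistic, and the toy predictor are illustrative assumptions only.

# Minimal sketch (assumptions only; not the paper's architecture):
# canonicalize -> predict in the canonical pose -> map back, then check
# SE(3) equivariance numerically.
import numpy as np

def canonical_frame(points):
    # Centroid + sign-fixed PCA axes; equivariant for non-degenerate clouds.
    center = points.mean(axis=0)
    centered = points - center
    cov = centered.T @ centered
    _, vecs = np.linalg.eigh(cov)          # columns are eigenvectors
    axes = vecs[:, ::-1]                   # order by decreasing variance
    # Fix each axis sign with a third-moment statistic of the cloud.
    for k in range(3):
        if np.sum((centered @ axes[:, k]) ** 3) < 0:
            axes[:, k] *= -1
    axes[:, 2] = np.cross(axes[:, 0], axes[:, 1])  # enforce right-handedness
    return axes, center                    # rotation R (3x3), translation t (3,)

def toy_predictor(canonical_points):
    # Placeholder for a learned per-pose predictor (arbitrary nonlinearity).
    return canonical_points + 0.1 * np.tanh(canonical_points)

def predict_next(points):
    R, t = canonical_frame(points)
    local = (points - t) @ R               # normalize the pose
    moved = toy_predictor(local)           # predict motion in the canonical pose
    return moved @ R.T + t                 # map back to the input pose

# Numerical equivariance check: g(predict(x)) == predict(g(x)) for g in SE(3).
rng = np.random.default_rng(0)
x = rng.normal(size=(64, 3))
A, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(A) < 0:
    A[:, 0] *= -1                          # make A a proper rotation
b = rng.normal(size=3)
lhs = predict_next(x) @ A.T + b
rhs = predict_next(x @ A.T + b)
print(np.allclose(lhs, rhs, atol=1e-6))    # True for generic point clouds

Because the predictor only ever sees coordinates in the input-derived frame, any rotation and translation of the input is absorbed by the frame and reapplied to the output, which is the basic reason a pose-normalized pipeline of this kind is SE(3)-equivariant.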

Cite

Text

Yu and Sun. "Pose-Transformed Equivariant Network for 3D Point Trajectory Prediction." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.00526

Markdown

[Yu and Sun. "Pose-Transformed Equivariant Network for 3D Point Trajectory Prediction." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/yu2024cvpr-posetransformed/) doi:10.1109/CVPR52733.2024.00526

BibTeX

@inproceedings{yu2024cvpr-posetransformed,
  title     = {{Pose-Transformed Equivariant Network for 3D Point Trajectory Prediction}},
  author    = {Yu, Ruixuan and Sun, Jian},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2024},
  pages     = {5503-5512},
  doi       = {10.1109/CVPR52733.2024.00526},
  url       = {https://mlanthology.org/cvpr/2024/yu2024cvpr-posetransformed/}
}