Prototypical Transformer as Unified Motion Learners
Abstract
In this work, we introduce the Prototypical Transformer (ProtoFormer), a general and unified framework that approaches various motion tasks from a prototype perspective. ProtoFormer seamlessly integrates prototype learning with the Transformer by thoughtfully considering motion dynamics, and introduces two innovative designs. First, Cross-Attention Prototyping discovers prototypes based on signature motion patterns, providing transparency in understanding motion scenes. Second, Latent Synchronization guides feature representation learning via prototypes, effectively mitigating the problem of motion uncertainty. Empirical results demonstrate that our approach achieves competitive performance on popular motion tasks such as optical flow and scene depth. Furthermore, it exhibits generality across various downstream tasks, including object tracking and video stabilization.
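To make the Cross-Attention Prototyping idea more concrete, here is a minimal, hypothetical PyTorch sketch of how learnable prototype tokens could attend to motion features via cross-attention. The module name, dimensions, and layer choices are illustrative assumptions and do not reflect the authors' released implementation.

```python
import torch
import torch.nn as nn

class CrossAttentionPrototyping(nn.Module):
    """Illustrative sketch (not the authors' code): learnable prototype tokens
    query a flattened motion feature map through cross-attention, so each
    prototype aggregates one signature motion pattern."""
    def __init__(self, num_prototypes: int = 8, dim: int = 256, num_heads: int = 8):
        super().__init__()
        # Learnable prototype embeddings act as queries over the motion features.
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, dim))
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, motion_feats: torch.Tensor) -> torch.Tensor:
        # motion_feats: (B, N, dim) flattened spatial motion features.
        B = motion_feats.size(0)
        queries = self.prototypes.unsqueeze(0).expand(B, -1, -1)  # (B, P, dim)
        # Each prototype attends to the motion regions it explains best.
        attended, _ = self.cross_attn(queries, motion_feats, motion_feats)
        return self.norm(attended)  # (B, P, dim) prototype representations


# Usage: discover prototype representations from a batch of motion features.
feats = torch.randn(2, 1024, 256)             # e.g. a 32x32 feature map, flattened
protos = CrossAttentionPrototyping()(feats)   # -> (2, 8, 256)
```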
Cite
Text
Han et al. "Prototypical Transformer as Unified Motion Learners." International Conference on Machine Learning, 2024.

Markdown
[Han et al. "Prototypical Transformer as Unified Motion Learners." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/han2024icml-prototypical/)

BibTeX
@inproceedings{han2024icml-prototypical,
  title     = {{Prototypical Transformer as Unified Motion Learners}},
  author    = {Han, Cheng and Lu, Yawen and Sun, Guohao and Liang, James Chenhao and Cao, Zhiwen and Wang, Qifan and Guan, Qiang and Dianat, Sohail and Rao, Raghuveer and Geng, Tong and Tao, Zhiqiang and Liu, Dongfang},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {17416--17436},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/han2024icml-prototypical/}
}