Point Primitive Transformer for Long-Term 4D Point Cloud Video Understanding
Abstract
This paper proposes a 4D backbone for long-term point cloud video understanding. A typical way to capture spatial-temporal context is to use 4D convolutions or a transformer without hierarchy. However, such methods are neither effective nor efficient enough due to camera motion, scene changes, sampling patterns, and the complexity of 4D data. To address these issues, we leverage primitive planes as a mid-level representation to capture the long-term spatial-temporal context in 4D point cloud videos, and propose a novel hierarchical backbone named Point Primitive Transformer (PPTr), which is mainly composed of intra-primitive point transformers and primitive transformers. Extensive experiments show that PPTr outperforms previous state-of-the-art methods on different tasks.
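To make the two-level design concrete, below is a minimal PyTorch sketch of the hierarchical attention idea the abstract describes: attention among points within each primitive plane, followed by attention across primitive-level features. The module name, tensor layout, and the mean-pooling step are illustrative assumptions, not the authors' PPTr implementation.

```python
# A minimal sketch of two-level (intra-primitive, then cross-primitive)
# attention, assuming points have already been grouped into primitive planes.
# Shapes, pooling, and names are hypothetical; see the paper for PPTr itself.
import torch
import torch.nn as nn


class HierarchicalPrimitiveAttention(nn.Module):
    """Attend among points inside each primitive, then among primitives."""

    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        # Level 1: intra-primitive point transformer (points within one plane).
        self.point_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Level 2: primitive transformer (primitive tokens across the video).
        self.primitive_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_primitives, points_per_primitive, dim)
        b, p, n, d = x.shape
        pts = x.reshape(b * p, n, d)
        pts, _ = self.point_attn(pts, pts, pts)          # attention within each primitive
        prim = pts.mean(dim=1).reshape(b, p, d)          # pool points into primitive tokens
        prim, _ = self.primitive_attn(prim, prim, prim)  # attention across primitives
        return prim                                      # (batch, num_primitives, dim)


if __name__ == "__main__":
    # Toy input: 2 videos, 16 primitive planes each, 32 points per primitive.
    feats = torch.randn(2, 16, 32, 64)
    out = HierarchicalPrimitiveAttention()(feats)
    print(out.shape)  # torch.Size([2, 16, 64])
```

Restricting the first attention stage to points inside a single primitive keeps the quadratic attention cost local, while the second stage over far fewer primitive tokens captures long-range context cheaply, which is the efficiency argument the abstract makes against flat 4D convolutions or non-hierarchical transformers.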
Cite
Text
Wen et al. "Point Primitive Transformer for Long-Term 4D Point Cloud Video Understanding." Proceedings of the European Conference on Computer Vision (ECCV), 2022. doi:10.1007/978-3-031-19818-2_2Markdown
[Wen et al. "Point Primitive Transformer for Long-Term 4D Point Cloud Video Understanding." Proceedings of the European Conference on Computer Vision (ECCV), 2022.](https://mlanthology.org/eccv/2022/wen2022eccv-point/) doi:10.1007/978-3-031-19818-2_2BibTeX
@inproceedings{wen2022eccv-point,
title = {{Point Primitive Transformer for Long-Term 4D Point Cloud Video Understanding}},
author = {Wen, Hao and Liu, Yunze and Huang, Jingwei and Duan, Bo and Yi, Li},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2022},
doi = {10.1007/978-3-031-19818-2_2},
url = {https://mlanthology.org/eccv/2022/wen2022eccv-point/}
}