TorchSparse++: Efficient Point Cloud Engine
Abstract
Point cloud computation has become an increasingly important workload for autonomous driving and other applications. Unlike dense 2D computation, point cloud convolution has sparse and irregular computation patterns and thus requires dedicated inference system support with specialized high-performance kernels. While existing point cloud deep learning libraries have developed different dataflows for convolution on point clouds, they assume a single dataflow throughout the execution of the entire model. In this work, we systematically analyze and improve existing dataflows. Our resulting system, TorchSparse++, achieves 2.9×, 3.3×, 2.2×, and 1.8× measured end-to-end inference speedup on an NVIDIA A100 GPU over the state-of-the-art MinkowskiEngine, SpConv 1.2, TorchSparse, and SpConv v2, respectively. Furthermore, TorchSparse++ is the only system to date that supports all necessary primitives for 3D segmentation, detection, and reconstruction workloads in autonomous driving. Code is publicly released at https://github.com/mit-han-lab/torchsparse.
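To make the workload concrete, below is a minimal sketch of running a single sparse 3D convolution with the released TorchSparse library. It is illustrative only: the coordinate layout (batch index position) and constructor arguments have changed across TorchSparse versions, so the exact shapes and argument names here are assumptions; consult the repository linked above for the current API.

```python
import torch
from torchsparse import SparseTensor
from torchsparse import nn as spnn

# Assumed setup: 1,000 voxelized points with 4 input features each.
# Coordinates are integer voxel indices plus a batch index; the exact
# ordering (batch-first vs. batch-last) depends on the TorchSparse version.
coords = torch.randint(0, 64, (1000, 4), dtype=torch.int32).cuda()
feats = torch.randn(1000, 4).cuda()

x = SparseTensor(feats, coords)

# A single submanifold-style sparse convolution layer: 4 -> 16 channels,
# 3x3x3 kernel, stride 1. TorchSparse selects the dataflow and kernels
# used to execute this layer internally.
conv = spnn.Conv3d(4, 16, kernel_size=3, stride=1).cuda()
y = conv(x)  # y.feats has shape [N, 16]; y.coords matches the input at stride 1
```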
Cite
Text
Tang et al. "TorchSparse++: Efficient Point Cloud Engine." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2023. doi:10.1109/CVPRW59228.2023.00025
Markdown
[Tang et al. "TorchSparse++: Efficient Point Cloud Engine." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2023.](https://mlanthology.org/cvprw/2023/tang2023cvprw-torchsparse/) doi:10.1109/CVPRW59228.2023.00025
BibTeX
@inproceedings{tang2023cvprw-torchsparse,
title = {{TorchSparse++: Efficient Point Cloud Engine}},
author = {Tang, Haotian and Yang, Shang and Liu, Zhijian and Hong, Ke and Yu, Zhongming and Li, Xiuyu and Dai, Guohao and Wang, Yu and Han, Song},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2023},
pages = {202--209},
doi = {10.1109/CVPRW59228.2023.00025},
url = {https://mlanthology.org/cvprw/2023/tang2023cvprw-torchsparse/}
}