msLPCC: A Multimodal-Driven Scalable Framework for Deep LiDAR Point Cloud Compression
Abstract
LiDAR sensors are widely used in autonomous driving, and their growing storage and transmission demands have made LiDAR point cloud compression (LPCC) an active research topic. To address the challenges posed by the large scale and the uneven spatial and categorical distribution of LiDAR point data, this paper presents a new multimodal-driven scalable LPCC framework. For the large-scale challenge, we decouple the original LiDAR data into multi-layer point subsets and compress and transmit each layer separately, so that the reconstruction quality requirements of different scenarios can be met. For the uneven-distribution challenge, we extract, align, and fuse heterologous feature representations, including a point modality carrying position information, a depth modality carrying spatial-distance information, and a segmentation modality carrying category information. Extensive experimental results on the benchmark SemanticKITTI database validate that our method outperforms 14 recent representative LPCC methods.
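To make the scalable, layer-wise idea concrete, the sketch below shows how decoupling a point cloud into independently coded layers lets a decoder stop after any prefix of the layers to trade quality for bitrate. This is only an illustrative Python toy under assumed choices (random partitioning into layers, voxel quantization as a stand-in for a learned entropy coder, and hypothetical names such as `split_into_layers`); it is not the authors' implementation.

```python
import numpy as np

def split_into_layers(points, num_layers=3, seed=0):
    """Partition a point cloud (N, 3) into disjoint subsets.
    The union of the first k subsets gives a progressively denser reconstruction."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(points))
    return [points[idx] for idx in np.array_split(order, num_layers)]

def encode_layer(layer_points, voxel_size=0.05):
    """Quantize one layer to integer voxel coordinates
    (a crude stand-in for a real per-layer compressor)."""
    return np.unique(np.floor(layer_points / voxel_size).astype(np.int32), axis=0)

def decode_layers(encoded_layers, voxel_size=0.05):
    """Reconstruct from however many layers were actually received."""
    return np.concatenate([(v + 0.5) * voxel_size for v in encoded_layers], axis=0)

# Toy usage: compress a random cloud into 3 scalable layers.
cloud = np.random.rand(10000, 3) * 50.0
bitstreams = [encode_layer(layer) for layer in split_into_layers(cloud, num_layers=3)]

coarse = decode_layers(bitstreams[:1])  # lowest quality, smallest payload
full = decode_layers(bitstreams)        # all layers: densest reconstruction
print(coarse.shape, full.shape)
```

The design point this illustrates is that each layer is coded and transmitted on its own, so reconstruction quality scales with how many layers a given scenario can afford to receive.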
Cite
Text
Wang et al. "msLPCC: A Multimodal-Driven Scalable Framework for Deep LiDAR Point Cloud Compression." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I6.28362
Markdown
[Wang et al. "msLPCC: A Multimodal-Driven Scalable Framework for Deep LiDAR Point Cloud Compression." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/wang2024aaai-mslpcc/) doi:10.1609/AAAI.V38I6.28362
BibTeX
@inproceedings{wang2024aaai-mslpcc,
title = {{msLPCC: A Multimodal-Driven Scalable Framework for Deep LiDAR Point Cloud Compression}},
author = {Wang, Miaohui and Huang, Runnan and Dong, Hengjin and Lin, Di and Song, Yun and Xie, Wuyuan},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2024},
pages = {5526--5534},
doi = {10.1609/AAAI.V38I6.28362},
url = {https://mlanthology.org/aaai/2024/wang2024aaai-mslpcc/}
}