Efficient Hierarchical Entropy Model for Learned Point Cloud Compression
Abstract
Learning an accurate entropy model is a fundamental way to remove redundancy in point cloud compression. Recently, octree-based auto-regressive entropy models that adopt the self-attention mechanism to exploit dependencies in a large-scale context have proven promising. However, heavy global attention computation and auto-regressive contexts make them inefficient for practical applications. To improve the efficiency of the attention model, we propose a hierarchical attention structure whose complexity is linear in the context scale while maintaining a global receptive field. Furthermore, we present a grouped context structure that addresses the serial decoding issue caused by auto-regression while preserving compression performance. Experiments demonstrate that the proposed entropy model achieves superior rate-distortion performance and significantly reduces decoding latency compared with the state-of-the-art large-scale auto-regressive entropy model.
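The two ideas named in the abstract can be sketched concretely. Below is a minimal, hypothetical PyTorch sketch of linear-complexity hierarchical attention; it reflects our own reading of the idea, not the authors' released code, and all names (`HierarchicalAttention`, `window`, `num_summaries`) are assumptions. Nodes first attend within fixed-size local windows, then every node attends to a fixed number of pooled global summaries, so cost grows linearly with the number of context nodes while the summary tokens preserve a global receptive field.

```python
# Hypothetical sketch of hierarchical attention with linear cost;
# an illustration of the idea, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalAttention(nn.Module):
    def __init__(self, dim: int, window: int = 64,
                 num_summaries: int = 32, heads: int = 4):
        super().__init__()
        self.window = window
        self.num_summaries = num_summaries
        self.local_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.global_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_nodes, dim) features of octree context nodes
        b, n, d = x.shape
        pad = (-n) % self.window            # pad to a whole number of windows
        if pad:
            x = torch.cat([x, x.new_zeros(b, pad, d)], dim=1)
        w = x.shape[1] // self.window
        # Level 1: full attention only inside each window -> O(n * window)
        local = x.reshape(b * w, self.window, d)
        local, _ = self.local_attn(local, local, local)
        local = local.reshape(b, w * self.window, d)
        # Level 2: every node attends to a fixed set of pooled summaries of
        # the whole context -> O(n * num_summaries), global receptive field
        summaries = F.adaptive_avg_pool1d(
            local.transpose(1, 2), self.num_summaries).transpose(1, 2)
        glob, _ = self.global_attn(local, summaries, summaries)
        return (local + glob)[:, :n]
```

With `window` and `num_summaries` held fixed, both attention steps cost O(n) in the number of context nodes n, in contrast to the O(n²) of full global self-attention. Similarly, the grouped context idea can be read as trading strict node-by-node auto-regression for group-by-group decoding: nodes within a group are treated as conditionally independent given all previously decoded groups, so each group decodes in parallel. A hypothetical loop (the round-robin split and `decode_group` are placeholders, not the paper's scheme):

```python
def decode_in_groups(nodes, num_groups, decode_group):
    """Decode nodes group-by-group instead of one-by-one.

    `decode_group` (a placeholder) decodes all nodes of one group in
    parallel, conditioned on everything decoded so far, so the serial
    chain length drops from len(nodes) to num_groups.
    """
    decoded = []
    for g in range(num_groups):
        group = nodes[g::num_groups]   # hypothetical round-robin split
        decoded.extend(decode_group(group, context=list(decoded)))
    return decoded
```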
Cite
Text
Song et al. "Efficient Hierarchical Entropy Model for Learned Point Cloud Compression." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.01381

Markdown

[Song et al. "Efficient Hierarchical Entropy Model for Learned Point Cloud Compression." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/song2023cvpr-efficient/) doi:10.1109/CVPR52729.2023.01381

BibTeX
@inproceedings{song2023cvpr-efficient,
title = {{Efficient Hierarchical Entropy Model for Learned Point Cloud Compression}},
author = {Song, Rui and Fu, Chunyang and Liu, Shan and Li, Ge},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2023},
pages = {14368--14377},
doi = {10.1109/CVPR52729.2023.01381},
url = {https://mlanthology.org/cvpr/2023/song2023cvpr-efficient/}
}