CPFN: Cascaded Primitive Fitting Networks for High-Resolution Point Clouds
Abstract
Representing human-made objects as a collection of base primitives has a long history in computer vision and reverse engineering. In the case of high-resolution point cloud scans, the challenge is to detect both large primitives and those explaining the detailed parts. While the classical RANSAC approach requires case-specific parameter tuning, state-of-the-art networks are limited by the memory consumption of their backbone modules, such as PointNet++, and hence fail to detect the fine-scale primitives. We present Cascaded Primitive Fitting Networks (CPFN), which relies on an adaptive patch sampling network to assemble detection results of global and local primitive detection networks. As a key enabler, we present a merging formulation that dynamically aggregates the primitives across global and local scales. Our evaluation demonstrates that CPFN improves the state-of-the-art SPFN performance by 13-14% on high-resolution point cloud datasets and specifically improves the detection of fine-scale primitives by 20-22%. Our code is available at: https://github.com/erictuanle/CPFN
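A minimal sketch of the two-scale pipeline the abstract describes, assuming hypothetical placeholder callables (global_spfn, local_spfn, patch_selector, merge_primitives) rather than the authors' actual API; it only illustrates the coarse global pass, patch-based local passes, and cross-scale merging.

```python
# Hypothetical sketch of the cascaded global/local fitting pipeline.
# All callables and the patch_size value are placeholder assumptions,
# not the released CPFN implementation.
import numpy as np

def cascaded_primitive_fitting(points, global_spfn, local_spfn,
                               patch_selector, merge_primitives,
                               patch_size=8192):
    """Fit primitives at two scales and merge the detections.

    points: (N, 3) high-resolution point cloud.
    global_spfn / local_spfn: callables returning a list of primitives.
    patch_selector: callable proposing indices where fine detail is expected.
    merge_primitives: callable aggregating global and local detections.
    """
    # 1. Coarse pass on a uniform subsample captures the large primitives.
    idx = np.random.choice(len(points), patch_size, replace=False)
    global_prims = global_spfn(points[idx])

    # 2. The sampling network proposes regions likely to contain small parts.
    centers = patch_selector(points)

    # 3. Fine pass on local neighborhoods around each proposed center.
    local_prims = []
    for c in centers:
        dists = np.linalg.norm(points - points[c], axis=1)
        patch = points[np.argsort(dists)[:patch_size]]
        local_prims.extend(local_spfn(patch))

    # 4. Merge detections across scales into one consistent primitive set.
    return merge_primitives(global_prims, local_prims)
```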
Cite
Text
Lê et al. "CPFN: Cascaded Primitive Fitting Networks for High-Resolution Point Clouds." International Conference on Computer Vision, 2021. doi:10.1109/ICCV48922.2021.00736
Markdown
[Lê et al. "CPFN: Cascaded Primitive Fitting Networks for High-Resolution Point Clouds." International Conference on Computer Vision, 2021.](https://mlanthology.org/iccv/2021/le2021iccv-cpfn/) doi:10.1109/ICCV48922.2021.00736
BibTeX
@inproceedings{le2021iccv-cpfn,
title = {{CPFN: Cascaded Primitive Fitting Networks for High-Resolution Point Clouds}},
author = {Lê, Eric-Tuan and Sung, Minhyuk and Ceylan, Duygu and Mech, Radomir and Boubekeur, Tamy and Mitra, Niloy J.},
booktitle = {International Conference on Computer Vision},
year = {2021},
pages = {7457--7466},
doi = {10.1109/ICCV48922.2021.00736},
url = {https://mlanthology.org/iccv/2021/le2021iccv-cpfn/}
}