Lepard: Learning Partial Point Cloud Matching in Rigid and Deformable Scenes
Abstract
We present Lepard, a learning-based approach for partial point cloud matching in rigid and deformable scenes. Its key characteristics are the following techniques that exploit 3D positional knowledge for point cloud matching: 1) An architecture that disentangles the point cloud representation into a feature space and a 3D position space. 2) A position encoding method that explicitly reveals 3D relative distance information through the dot product of vectors. 3) A repositioning technique that modifies the cross-point-cloud relative positions. Ablation studies demonstrate the effectiveness of these techniques. In rigid cases, Lepard combined with RANSAC and ICP achieves state-of-the-art registration recall of 93.9% / 71.3% on the 3DMatch / 3DLoMatch benchmarks. In deformable cases, Lepard achieves +27.1% / +34.8% higher non-rigid feature matching recall than the prior art on our newly constructed 4DMatch / 4DLoMatch benchmark.
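The second technique above describes a position encoding whose dot product exposes relative 3D distance. Below is a minimal sketch of one way such a property can be realized with a rotary-style encoding applied per axis; the helper names `rotary_encode` and `encode_3d` are hypothetical, and the paper's actual network applies this idea to learned features inside attention rather than to raw coordinates as done here.

```python
import numpy as np

def rotary_encode(x, d=8, base=10000.0):
    """Encode a scalar coordinate x as (cos, sin) pairs at geometrically
    spaced frequencies, in the spirit of rotary position embeddings.
    d must be even. (Hypothetical helper for illustration.)"""
    freqs = base ** (-np.arange(0, d, 2) / d)   # (d/2,) frequencies
    angles = x * freqs                          # per-pair rotation angles
    return np.concatenate([np.cos(angles), np.sin(angles)])

def encode_3d(p, d=8):
    """Concatenate per-axis encodings of a 3D point p."""
    return np.concatenate([rotary_encode(c, d) for c in p])

# Dot product depends only on the relative offset, since
# cos(a)cos(b) + sin(a)sin(b) = cos(a - b), summed over frequencies/axes:
p, q = np.array([1.0, 2.0, 3.0]), np.array([1.5, 2.0, 2.5])
lhs = encode_3d(p) @ encode_3d(q)
rhs = encode_3d(p - q) @ encode_3d(np.zeros(3))
print(np.allclose(lhs, rhs))  # True: dot product reveals relative position
```

The printed check confirms the key property the abstract names: the dot product of two encoded points is a function of their relative position alone, which is what lets attention scores carry 3D relative distance information.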
Cite
Text
Li and Harada. "Lepard: Learning Partial Point Cloud Matching in Rigid and Deformable Scenes." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.00547
Markdown
[Li and Harada. "Lepard: Learning Partial Point Cloud Matching in Rigid and Deformable Scenes." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/li2022cvpr-lepard/) doi:10.1109/CVPR52688.2022.00547
BibTeX
@inproceedings{li2022cvpr-lepard,
title = {{Lepard: Learning Partial Point Cloud Matching in Rigid and Deformable Scenes}},
author = {Li, Yang and Harada, Tatsuya},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2022},
pages = {5554--5564},
doi = {10.1109/CVPR52688.2022.00547},
url = {https://mlanthology.org/cvpr/2022/li2022cvpr-lepard/}
}