Multi-Scale Local Implicit Keypoint Descriptor for Keypoint Matching
Abstract
We investigate the potential of multi-scale descriptors, which have been under-explored in the existing literature. At the pixel level, we propose utilizing both coarse- and fine-grained descriptors and present a scale-aware negative-sampling method that trains descriptors at different scales in a complementary manner, thereby improving their discriminative power. At the sub-pixel level, we further propose adopting coordinate-based implicit modeling, learning the non-linearity of local descriptors over continuous-domain coordinates. Our experiments show that the proposed method achieves state-of-the-art performance on various tasks, i.e., image matching, relative pose estimation, and visual localization.
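To make the idea concrete, the following is a minimal NumPy sketch, not the authors' implementation, of what a coordinate-based implicit descriptor head could look like: features from a coarse and a fine map are bilinearly sampled at a continuous keypoint coordinate, concatenated with the sub-pixel offset, and passed through a small MLP. All names, dimensions, and the assumed stride-2 relation between the two maps are illustrative assumptions.

```python
import numpy as np

def bilinear_sample(fmap, y, x):
    """Bilinearly sample a C x H x W feature map at a continuous (y, x) location."""
    C, H, W = fmap.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, H - 1), min(x0 + 1, W - 1)
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * fmap[:, y0, x0]
            + (1 - dy) * dx * fmap[:, y0, x1]
            + dy * (1 - dx) * fmap[:, y1, x0]
            + dy * dx * fmap[:, y1, x1])

def implicit_descriptor(coarse, fine, y, x, W1, b1, W2, b2):
    """Hypothetical implicit head: sample multi-scale features at a continuous
    coordinate, append the sub-pixel offset, run a small ReLU MLP, and
    L2-normalize the resulting descriptor. Weights W1, b1, W2, b2 would be
    learned in practice; here they are free parameters."""
    # Assumption: the coarse map has half the resolution of the fine map.
    f_coarse = bilinear_sample(coarse, y / 2.0, x / 2.0)
    f_fine = bilinear_sample(fine, y, x)
    frac = np.array([y - np.floor(y), x - np.floor(x)])  # sub-pixel offset
    h = np.maximum(0.0, W1 @ np.concatenate([f_coarse, f_fine, frac]) + b1)
    d = W2 @ h + b2
    return d / (np.linalg.norm(d) + 1e-8)
```

Because the MLP takes the continuous offset as input, the descriptor varies smoothly with sub-pixel keypoint position rather than being tied to the feature-map grid, which is the core of the implicit, coordinate-based formulation described in the abstract.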
Cite
Text
Lee et al. "Multi-Scale Local Implicit Keypoint Descriptor for Keypoint Matching." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2023. doi:10.1109/CVPRW59228.2023.00654
Markdown
[Lee et al. "Multi-Scale Local Implicit Keypoint Descriptor for Keypoint Matching." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2023.](https://mlanthology.org/cvprw/2023/lee2023cvprw-multiscale/) doi:10.1109/CVPRW59228.2023.00654
BibTeX
@inproceedings{lee2023cvprw-multiscale,
title = {{Multi-Scale Local Implicit Keypoint Descriptor for Keypoint Matching}},
author = {Lee, JongMin and Park, Eunhyeok and Yoo, Sungjoo},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2023},
pages = {6145--6154},
doi = {10.1109/CVPRW59228.2023.00654},
url = {https://mlanthology.org/cvprw/2023/lee2023cvprw-multiscale/}
}