Robust Click-Point Linking: Matching Visually Dissimilar Local Regions

Abstract

This paper presents robust click-point linking: a novel localized registration framework that allows users to interactively prescribe where the accuracy has to be high. By emphasizing locality and interactivity, our solution is faithful to how registration results are used in practice. Given a user-specified point, click-point linking provides a single point-wise correspondence between a data pair. In order to link visually dissimilar local regions, a correspondence is sought by using only geometrical context, without comparing the local appearances. Our solution is formulated as a maximum likelihood estimation (MLE) without estimating a domain transformation explicitly. A spatial likelihood of Gaussian mixture form is designed to capture geometrical configurations between the point-of-interest and a hierarchy of global-to-local 3D landmarks that are detected using machine learning and entropy-based feature detectors. A closed-form formula is derived to specify each Gaussian component by exploiting geometric invariances under a specific group of domain transformations via RANSAC-like random sampling. A mean shift algorithm is applied to robustly and efficiently solve the local MLE problem, replacing the standard consensus step of RANSAC. Two transformation groups, pure translation and scaling-plus-translation, are considered in this paper. We test the feasibility of the proposed approach with 16 pairs of whole-body CT data, demonstrating its effectiveness.
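The core estimation step described above — seeking the mode of a Gaussian-mixture spatial likelihood with mean shift rather than a RANSAC consensus vote — can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, isotropic-covariance assumption, and parameters below are illustrative, with each mixture component standing in for one sampled landmark subset's predicted correspondence point.

```python
import numpy as np

def gaussian_mixture_mean_shift(x0, means, sigmas, weights, iters=100, tol=1e-6):
    """Seek the nearest mode of an isotropic Gaussian mixture via mean shift.

    x0      : (d,) starting point (e.g., the user's click mapped by one sample)
    means   : (N, d) component centers (candidate correspondence predictions)
    sigmas  : (N,) per-component isotropic standard deviations
    weights : (N,) mixture weights
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        # squared distances from the current estimate to every component center
        d2 = np.sum((means - x) ** 2, axis=1)
        # per-component responsibility: weight * Gaussian kernel / sigma^2
        resp = weights * np.exp(-0.5 * d2 / sigmas**2) / sigmas**2
        # mean-shift update: responsibility-weighted average of the centers
        x_new = resp @ means / resp.sum()
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

Because the update is a weighted average dominated by nearby components, predictions from inconsistent landmark samples (outliers, analogous to RANSAC's bad hypotheses) receive vanishing responsibility and are rejected implicitly, which is how mean shift can replace the explicit consensus step.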

Cite

Text

Okada and Huang. "Robust Click-Point Linking: Matching Visually Dissimilar Local Regions." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2007. doi:10.1109/CVPR.2007.383360

Markdown

[Okada and Huang. "Robust Click-Point Linking: Matching Visually Dissimilar Local Regions." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2007.](https://mlanthology.org/cvpr/2007/okada2007cvpr-robust/) doi:10.1109/CVPR.2007.383360

BibTeX

@inproceedings{okada2007cvpr-robust,
  title     = {{Robust Click-Point Linking: Matching Visually Dissimilar Local Regions}},
  author    = {Okada, Kazunori and Huang, Xiaolei},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year      = {2007},
  doi       = {10.1109/CVPR.2007.383360},
  url       = {https://mlanthology.org/cvpr/2007/okada2007cvpr-robust/}
}