Direct Photometric Alignment by Mesh Deformation

Abstract

The choice of motion model is vital in applications like image/video stitching and video stabilization. Conventional methods have explored approaches ranging from simple global parametric models to complex per-pixel optical flow. Mesh-based warping methods strike a good balance between computational complexity and model flexibility, but they typically require high-quality feature correspondences and suffer from mismatches and low-textured image content. In this paper, we propose a mesh-based photometric alignment method that minimizes pixel intensity differences instead of the Euclidean distances between known feature correspondences. The proposed method combines the superior performance of dense photometric alignment with the efficiency of mesh-based image warping. It achieves better global alignment quality than its feature-based counterpart on textured images and, more importantly, is also robust to low-textured image content. Extensive experiments show that our method handles a variety of images and videos and outperforms representative state-of-the-art methods in both image stitching and video stabilization tasks.
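To illustrate the core idea of photometric (rather than feature-based) alignment, the sketch below fits a single global translation by Gauss-Newton descent on raw pixel intensities. This is a deliberately simplified toy: the paper optimizes mesh vertex positions with a deformable warp, whereas here the motion model is collapsed to one translation, and all function names are our own illustrative choices, not the authors' implementation.

```python
import numpy as np

def warp_translate(img, d):
    """Sample img at the integer grid shifted by d = (dy, dx), with
    bilinear interpolation and edge clamping."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    ys = np.clip(ys + d[0], 0, h - 1)
    xs = np.clip(xs + d[1], 0, w - 1)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = ys - y0, xs - x0
    return ((1 - wy) * (1 - wx) * img[y0, x0] + (1 - wy) * wx * img[y0, x1]
            + wy * (1 - wx) * img[y1, x0] + wy * wx * img[y1, x1])

def photometric_align(src, dst, iters=100):
    """Estimate a translation d minimizing the sum of squared intensity
    differences between warp(src, d) and dst (Gauss-Newton updates)."""
    d = np.zeros(2)
    b = 4  # ignore a border band where warped samples are clamped
    for _ in range(iters):
        warped = warp_translate(src, d)
        gy, gx = np.gradient(warped)          # image gradients wrt (dy, dx)
        r = (warped - dst)[b:-b, b:-b].ravel()  # photometric residual
        J = np.stack([gy[b:-b, b:-b].ravel(),
                      gx[b:-b, b:-b].ravel()], axis=1)
        H = J.T @ J                            # Gauss-Newton normal equations
        step = np.linalg.solve(H + 1e-9 * np.eye(2), J.T @ r)
        d -= step
        if np.linalg.norm(step) < 1e-5:
            break
    return d
```

In the mesh-based setting of the paper, the two-parameter translation above is replaced by the positions of all mesh vertices, with pixel motion interpolated from the enclosing cell's vertices; the photometric objective itself stays the same.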

Cite

Text

Lin et al. "Direct Photometric Alignment by Mesh Deformation." Conference on Computer Vision and Pattern Recognition, 2017. doi:10.1109/CVPR.2017.289

Markdown

[Lin et al. "Direct Photometric Alignment by Mesh Deformation." Conference on Computer Vision and Pattern Recognition, 2017.](https://mlanthology.org/cvpr/2017/lin2017cvpr-direct/) doi:10.1109/CVPR.2017.289

BibTeX

@inproceedings{lin2017cvpr-direct,
  title     = {{Direct Photometric Alignment by Mesh Deformation}},
  author    = {Lin, Kaimo and Jiang, Nianjuan and Liu, Shuaicheng and Cheong, Loong-Fah and Do, Minh and Lu, Jiangbo},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2017},
  doi       = {10.1109/CVPR.2017.289},
  url       = {https://mlanthology.org/cvpr/2017/lin2017cvpr-direct/}
}