Photometric Bundle Adjustment for Dense Multi-View 3D Modeling
Abstract
Motivated by a Bayesian formulation of multi-view 3D reconstruction from images, we propose a dense 3D reconstruction technique that jointly refines the shape and the camera parameters of a scene by minimizing the photometric reprojection error between a generated model and the observed images, thereby taking all pixels of the original images into account. The minimization is performed using a gradient descent scheme consistent with the shape representation (here a triangular mesh), for which we derive evolution equations that optimize both the shape and the camera parameters. This can serve as a final refinement step in 3D reconstruction pipelines and improves the quality of the reconstruction by estimating the 3D shape and the camera calibration more accurately. Examples are shown for multi-view stereo, where the texture is also jointly optimized and improved, but the approach applies to any generative multi-view reconstruction setting (e.g. depth map fusion or multi-view photometric stereo).
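To make the objective described above concrete, the following is a minimal sketch of the kind of photometric energy and joint descent updates the abstract refers to. The notation is assumed for illustration and is not taken from the paper: $S$ denotes the surface mesh, $\theta_j$ the parameters of camera $j$, $I_j$ the observed image, $\hat{I}_j$ the image rendered from the current textured model, $\Omega_j$ the image domain, and $\tau$ a step size.

% Hedged sketch of a photometric reprojection energy of the kind the
% abstract describes; symbols are illustrative, not the paper's notation.
E(S, \theta_1, \dots, \theta_n)
  = \sum_{j=1}^{n} \int_{\Omega_j}
      \big( I_j(x) - \hat{I}_j(x;\, S, \theta_j) \big)^2 \, dx

% Joint gradient descent over shape and camera parameters:
S \leftarrow S - \tau \,\frac{\partial E}{\partial S},
\qquad
\theta_j \leftarrow \theta_j - \tau \,\frac{\partial E}{\partial \theta_j},
\quad j = 1, \dots, n

Because every pixel of every input image contributes to $E$, minimizing it jointly over $S$ and the $\theta_j$ plays the role of a dense, photometric analogue of classical (sparse, feature-based) bundle adjustment.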
Cite
Text
Delaunoy and Pollefeys. "Photometric Bundle Adjustment for Dense Multi-View 3D Modeling." Conference on Computer Vision and Pattern Recognition, 2014. doi:10.1109/CVPR.2014.193
Markdown
[Delaunoy and Pollefeys. "Photometric Bundle Adjustment for Dense Multi-View 3D Modeling." Conference on Computer Vision and Pattern Recognition, 2014.](https://mlanthology.org/cvpr/2014/delaunoy2014cvpr-photometric/) doi:10.1109/CVPR.2014.193
BibTeX
@inproceedings{delaunoy2014cvpr-photometric,
title = {{Photometric Bundle Adjustment for Dense Multi-View 3D Modeling}},
author = {Delaunoy, Amael and Pollefeys, Marc},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2014},
doi = {10.1109/CVPR.2014.193},
url = {https://mlanthology.org/cvpr/2014/delaunoy2014cvpr-photometric/}
}