Superresolution Texture Maps for Multiview Reconstruction

Abstract

We study a multiview setting in which several calibrated views of a textured object with known surface geometry are available. The objective is to estimate a diffuse texture map as precisely as possible. A superresolution image formation model based on the camera properties leads to a total variation energy for the desired texture map, which can be recovered as the minimizer of the functional by solving the Euler-Lagrange equation on the surface. The PDE is transformed to planar texture space via an automatically created conformal atlas, where it can be solved using total variation deblurring. The proposed approach makes it possible to recover a high-resolution, high-quality texture map even from lower-resolution photographs, which is of interest for a variety of image-based modeling applications.
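The abstract describes a total-variation-regularized superresolution energy. A generic sketch of such a functional (the notation here is illustrative of standard TV superresolution, not the paper's exact formulation): with unknown texture map $T$ on texture domain $\Omega$, input images $I_i$, per-view warps $\pi_i$ induced by the surface geometry and camera calibration, blur kernel $b$ modeling the point spread function, and a downsampling operator $D$,

```latex
E(T) \;=\; \sum_i \int_{\Omega_i} \bigl( D\,(b * (T \circ \pi_i^{-1})) - I_i \bigr)^2 \,\mathrm{d}x
\;+\; \lambda \int_{\Omega} |\nabla T| \,\mathrm{d}x .
```

The first term penalizes disagreement between the rendered (warped, blurred, downsampled) texture and each observed photograph; the total variation term $\lambda \int |\nabla T|$ regularizes the result while preserving edges. The minimizer is characterized by the corresponding Euler-Lagrange equation, which the paper solves in planar texture space via a conformal atlas.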

Cite

Text

Goldlücke and Cremers. "Superresolution Texture Maps for Multiview Reconstruction." IEEE/CVF International Conference on Computer Vision, 2009. doi:10.1109/ICCV.2009.5459378

Markdown

[Goldlücke and Cremers. "Superresolution Texture Maps for Multiview Reconstruction." IEEE/CVF International Conference on Computer Vision, 2009.](https://mlanthology.org/iccv/2009/goldlucke2009iccv-superresolution/) doi:10.1109/ICCV.2009.5459378

BibTeX

@inproceedings{goldlucke2009iccv-superresolution,
  title     = {{Superresolution Texture Maps for Multiview Reconstruction}},
  author    = {Goldlücke, Bastian and Cremers, Daniel},
  booktitle = {IEEE/CVF International Conference on Computer Vision},
  year      = {2009},
  pages     = {1677--1684},
  doi       = {10.1109/ICCV.2009.5459378},
  url       = {https://mlanthology.org/iccv/2009/goldlucke2009iccv-superresolution/}
}