Multiview Texture Models
Abstract
Mapping textured images onto smoothly approximated surfaces is often used to conceal the loss of their real, fine-grained relief. A limitation of mapping a fixed texture in such cases is that it will only be correct for one viewing and one illumination direction. The presence of geometric surface details causes changes that simple foreshortening and global color scaling cannot model well. Hence, one would like to synthesize different textures for different viewing conditions. A texture model is presented that accounts for viewpoint-dependent changes in texture appearance. It is highly compact and avoids copy-and-paste-like repetitions. The model is learned from example images taken from different viewpoints, and it supports texture synthesis for previously unseen conditions.
Cite
Text
Zalesny and Van Gool. "Multiview Texture Models." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2001. doi:10.1109/CVPR.2001.990530
Markdown
[Zalesny and Van Gool. "Multiview Texture Models." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2001.](https://mlanthology.org/cvpr/2001/zalesny2001cvpr-multiview/) doi:10.1109/CVPR.2001.990530
BibTeX
@inproceedings{zalesny2001cvpr-multiview,
  title = {{Multiview Texture Models}},
  author = {Zalesny, Alexey and Van Gool, Luc},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year = {2001},
  pages = {I:615-622},
  doi = {10.1109/CVPR.2001.990530},
  url = {https://mlanthology.org/cvpr/2001/zalesny2001cvpr-multiview/}
}