Example-Based Facade Texture Synthesis
Abstract
There is growing interest in the efficient creation of city models, be they virtual or as-built. We present a method for synthesizing complex, photo-realistic facade images from a single example. After parsing the example image into its semantic components, a tiling for it is generated. Novel tilings can then be created, yielding facade textures with different dimensions or with occluded parts inpainted. A genetic algorithm guides both the novel facades and the inpainted parts to be consistent with the example, in terms of their overall structure as well as their detailed textures. Promising results on multiple standard datasets, in particular across the different building styles they contain, demonstrate the potential of the method.
Cite
Text
Dai et al. "Example-Based Facade Texture Synthesis." International Conference on Computer Vision, 2013. doi:10.1109/ICCV.2013.136
Markdown
[Dai et al. "Example-Based Facade Texture Synthesis." International Conference on Computer Vision, 2013.](https://mlanthology.org/iccv/2013/dai2013iccv-examplebased/) doi:10.1109/ICCV.2013.136
BibTeX
@inproceedings{dai2013iccv-examplebased,
title = {{Example-Based Facade Texture Synthesis}},
author = {Dai, Dengxin and Riemenschneider, Hayko and Schmitt, Gerhard and Van Gool, Luc},
booktitle = {International Conference on Computer Vision},
year = {2013},
doi = {10.1109/ICCV.2013.136},
url = {https://mlanthology.org/iccv/2013/dai2013iccv-examplebased/}
}