FacadeNet: Conditional Facade Synthesis via Selective Editing

Abstract

We introduce FacadeNet, a deep learning approach for synthesizing building facade images from diverse viewpoints. Our method employs a conditional GAN that takes a single view of a facade along with the desired viewpoint information and generates an image of the facade from that viewpoint. To precisely modify view-dependent elements such as windows and doors while preserving the structure of view-independent components such as walls, we introduce a selective editing module. This module leverages image embeddings extracted from a pretrained vision transformer. Our experiments demonstrate state-of-the-art performance on building facade generation, surpassing alternative methods.
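
The sketch below is not the authors' implementation; it only illustrates, under assumed shapes and module names (SelectiveEditing, mask_head), how a selective-editing blend of this kind could look in PyTorch: a soft mask predicted from pretrained ViT patch embeddings decides where novel-view (view-dependent) features replace the input-view features.

# Minimal sketch, assuming DINO-style ViT-S/16 patch embeddings (dim 384);
# all layer choices and tensor shapes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelectiveEditing(nn.Module):
    def __init__(self, embed_dim=384):
        super().__init__()
        # Predict a soft "view-dependent" mask from ViT patch embeddings.
        self.mask_head = nn.Sequential(
            nn.Conv2d(embed_dim, 64, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, input_feats, novel_feats, patch_embeds):
        # input_feats / novel_feats: (B, C, H, W) generator features for the
        # input view and the target view; patch_embeds: (B, D, h, w) ViT tokens.
        mask = self.mask_head(patch_embeds)                      # (B, 1, h, w)
        mask = F.interpolate(mask, size=input_feats.shape[-2:],
                             mode="bilinear", align_corners=False)
        # Edit view-dependent regions (e.g. windows, doors), keep walls intact.
        return mask * novel_feats + (1.0 - mask) * input_feats

if __name__ == "__main__":
    module = SelectiveEditing()
    input_feats = torch.randn(2, 64, 128, 128)
    novel_feats = torch.randn(2, 64, 128, 128)
    patch_embeds = torch.randn(2, 384, 16, 16)   # hypothetical ViT patch grid
    out = module(input_feats, novel_feats, patch_embeds)
    print(out.shape)  # torch.Size([2, 64, 128, 128])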

Cite

Text

Georgiou et al. "FacadeNet: Conditional Facade Synthesis via Selective Editing." Winter Conference on Applications of Computer Vision, 2024.

Markdown

[Georgiou et al. "FacadeNet: Conditional Facade Synthesis via Selective Editing." Winter Conference on Applications of Computer Vision, 2024.](https://mlanthology.org/wacv/2024/georgiou2024wacv-facadenet/)

BibTeX

@inproceedings{georgiou2024wacv-facadenet,
  title     = {{FacadeNet: Conditional Facade Synthesis via Selective Editing}},
  author    = {Georgiou, Yiangos and Loizou, Marios and Kelly, Tom and Averkiou, Melinos},
  booktitle = {Winter Conference on Applications of Computer Vision},
  year      = {2024},
  pages     = {5384--5393},
  url       = {https://mlanthology.org/wacv/2024/georgiou2024wacv-facadenet/}
}