ZIGNeRF: Zero-Shot 3D Scene Representation with Invertible Generative Neural Radiance Fields

Abstract

Generative Neural Radiance Fields (NeRFs) have demonstrated remarkable proficiency in synthesizing multi-view images by learning the distribution of a set of unposed images. While existing generative NeRFs can produce 3D-consistent, high-quality random samples within the data distribution, creating a 3D representation of a single input image remains a formidable challenge. In this paper, we introduce ZIGNeRF, a model that performs zero-shot Generative Adversarial Network (GAN) inversion to generate multi-view images from a single out-of-distribution image. The model is built on a novel inverter that maps out-of-domain images to latent codes on the generator manifold. Notably, ZIGNeRF is capable of disentangling the object from the background and executing 3D operations such as 360-degree rotation as well as depth and horizontal translation. The efficacy of our model is validated on multiple real-image datasets: Cats, AFHQ, CelebA, CelebA-HQ, and CompCars.
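
The abstract describes a two-stage inference pipeline: an inverter encodes a single image into a latent code in one forward pass (hence "zero-shot", with no per-image optimization), and a latent-conditioned generative NeRF then renders that code under arbitrary camera poses. The following is a minimal PyTorch sketch of that pipeline, not the paper's code: the module names Inverter and GenerativeNeRF, the (azimuth, elevation, radius) pose parameterization, and all layer shapes are illustrative assumptions, and a toy MLP stands in for the actual NeRF volume renderer.

import math
import torch
import torch.nn as nn

class Inverter(nn.Module):
    # Maps a single RGB image to a latent code on the generator manifold.
    def __init__(self, latent_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, image):       # image: (B, 3, H, W)
        return self.encoder(image)  # latent code: (B, latent_dim)

class GenerativeNeRF(nn.Module):
    # Stand-in for a latent-conditioned NeRF generator: renders an image from
    # a latent code and a camera pose. A toy MLP replaces volume rendering here.
    def __init__(self, latent_dim=256, img_size=64):
        super().__init__()
        self.img_size = img_size
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 3, 512), nn.ReLU(),
            nn.Linear(512, 3 * img_size * img_size), nn.Tanh(),
        )

    def forward(self, z, pose):     # pose: (B, 3) = (azimuth, elevation, radius)
        x = self.net(torch.cat([z, pose], dim=-1))
        return x.view(-1, 3, self.img_size, self.img_size)

# Zero-shot multi-view synthesis: invert once, then render under many poses.
inverter, generator = Inverter(), GenerativeNeRF()
image = torch.randn(1, 3, 64, 64)   # stands in for one out-of-distribution input
z = inverter(image)                 # single forward pass, no per-image optimization
views = []
for azimuth in torch.linspace(0.0, 2.0 * math.pi, steps=8):
    pose = torch.tensor([[float(azimuth), 0.0, 1.0]])  # sweep the camera 360 degrees
    views.append(generator(z, pose))                   # (1, 3, 64, 64) novel view

Because the latent code is fixed after inversion, the camera pose alone controls the output, which is what enables the 3D operations (rotation, depth and horizontal translation) mentioned above.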

Cite

Text

Ko and Lee. "ZIGNeRF: Zero-Shot 3D Scene Representation with Invertible Generative Neural Radiance Fields." Winter Conference on Applications of Computer Vision, 2024.

Markdown

[Ko and Lee. "ZIGNeRF: Zero-Shot 3D Scene Representation with Invertible Generative Neural Radiance Fields." Winter Conference on Applications of Computer Vision, 2024.](https://mlanthology.org/wacv/2024/ko2024wacv-zignerf/)

BibTeX

@inproceedings{ko2024wacv-zignerf,
  title     = {{ZIGNeRF: Zero-Shot 3D Scene Representation with Invertible Generative Neural Radiance Fields}},
  author    = {Ko, Kanghyeok and Lee, Minhyeok},
  booktitle = {Winter Conference on Applications of Computer Vision},
  year      = {2024},
  pages     = {4986--4995},
  url       = {https://mlanthology.org/wacv/2024/ko2024wacv-zignerf/}
}