SinGRAF: Learning a 3D Generative Radiance Field for a Single Scene

Abstract

Generative models have shown great promise in synthesizing photorealistic 3D objects, but they require large amounts of training data. We introduce SinGRAF, a 3D-aware generative model that is trained with a few input images of a single scene. Once trained, SinGRAF generates different realizations of this 3D scene that preserve the appearance of the input while varying scene layout. For this purpose, we build on recent progress in 3D GAN architectures and introduce a novel progressive-scale patch discrimination approach during training. With several experiments, we demonstrate that the results produced by SinGRAF outperform the closest related works in both quality and diversity by a large margin.

Cite

Text

Son et al. "SinGRAF: Learning a 3D Generative Radiance Field for a Single Scene." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.00822

Markdown

[Son et al. "SinGRAF: Learning a 3D Generative Radiance Field for a Single Scene." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/son2023cvpr-singraf/) doi:10.1109/CVPR52729.2023.00822

BibTeX

@inproceedings{son2023cvpr-singraf,
  title     = {{SinGRAF: Learning a 3D Generative Radiance Field for a Single Scene}},
  author    = {Son, Minjung and Park, Jeong Joon and Guibas, Leonidas and Wetzstein, Gordon},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2023},
  pages     = {8507--8517},
  doi       = {10.1109/CVPR52729.2023.00822},
  url       = {https://mlanthology.org/cvpr/2023/son2023cvpr-singraf/}
}