VQ3D: Learning a 3D-Aware Generative Model on ImageNet

Abstract

Recent work has shown the possibility of training generative models of 3D content from 2D image collections on small datasets corresponding to a single object class, such as human faces, animal faces, or cars. However, these models struggle on larger and more complex datasets. To model diverse and unconstrained image collections such as ImageNet, we present VQ3D, which introduces a NeRF-based decoder into a two-stage vector-quantized autoencoder. Our Stage 1 reconstructs an input image and supports changing the camera position around the reconstructed scene, and our Stage 2 generates new 3D scenes. VQ3D is capable of generating and reconstructing 3D-aware images from the 1000-class ImageNet dataset of 1.2 million training images, and achieves a competitive ImageNet generation FID score of 16.8.
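
To make the two-stage design described in the abstract concrete, here is a minimal PyTorch sketch of the pipeline: an encoder produces latent tokens, a vector quantizer snaps them to a learned codebook, and a NeRF-style decoder volume-renders the quantized latent from an arbitrary camera. All module shapes, the mean-pooled conditioning, the toy linear encoder, and the ray setup are illustrative assumptions for exposition, not the authors' actual architecture.

```python
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    """Nearest-neighbour codebook lookup with a straight-through gradient."""
    def __init__(self, num_codes=1024, dim=64):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, z):                            # z: (n_tokens, dim)
        d = torch.cdist(z, self.codebook.weight)     # (n_tokens, num_codes)
        idx = d.argmin(dim=-1)                       # nearest code per token
        z_q = self.codebook(idx)
        return z + (z_q - z).detach(), idx           # straight-through estimator

class NeRFDecoder(nn.Module):
    """Maps 3D points, conditioned on the scene latent, to density and colour,
    then alpha-composites along camera rays (standard volume rendering)."""
    def __init__(self, dim=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3 + dim, 128), nn.ReLU(),
                                 nn.Linear(128, 4))  # -> (sigma, r, g, b)

    def forward(self, tokens, rays_o, rays_d, n_samples=32):
        t = torch.linspace(0.5, 4.0, n_samples)                   # sample depths
        pts = rays_o[:, None] + rays_d[:, None] * t[:, None]      # (R, S, 3)
        cond = tokens.mean(0).expand(*pts.shape[:2], -1)          # pooled latent
        out = self.mlp(torch.cat([pts, cond], dim=-1))
        sigma, rgb = out[..., 0].relu(), out[..., 1:].sigmoid()
        alpha = 1 - torch.exp(-sigma * (t[1] - t[0]))             # per-sample opacity
        trans = torch.cumprod(torch.cat([torch.ones_like(alpha[:, :1]),
                                         1 - alpha[:, :-1]], dim=1), dim=1)
        return ((trans * alpha)[..., None] * rgb).sum(dim=1)      # (R, 3) pixels

# Stage 1 (sketch): encode an image into quantized tokens, then render it back
# from the original or a perturbed camera pose.
encoder = nn.Linear(3 * 64 * 64, 16 * 64)          # toy stand-in for a real encoder
quant, decoder = VectorQuantizer(dim=64), NeRFDecoder(dim=64)

image = torch.rand(3, 64, 64)
tokens = encoder(image.reshape(-1)).reshape(16, 64)  # 16 latent tokens
tokens_q, codes = quant(tokens)

rays_o = torch.zeros(64 * 64, 3)                     # camera at the origin
rays_d = torch.randn(64 * 64, 3)
rays_d = rays_d / rays_d.norm(dim=-1, keepdim=True)  # placeholder ray directions
pixels = decoder(tokens_q, rays_o, rays_d)           # (4096, 3) rendered colours

# Stage 2 (not shown) would fit an autoregressive model over the discrete
# `codes`, so sampling a new code sequence yields a new renderable 3D scene.
```

Moving the camera at inference time amounts to recomputing `rays_o` and `rays_d` for a new pose and re-rendering the same quantized latent, which is what gives the model its 3D awareness.
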

Cite

Text

Sargent et al. "VQ3D: Learning a 3D-Aware Generative Model on ImageNet." International Conference on Computer Vision, 2023. doi:10.1109/ICCV51070.2023.00391

Markdown

[Sargent et al. "VQ3D: Learning a 3D-Aware Generative Model on ImageNet." International Conference on Computer Vision, 2023.](https://mlanthology.org/iccv/2023/sargent2023iccv-vq3d/) doi:10.1109/ICCV51070.2023.00391

BibTeX

@inproceedings{sargent2023iccv-vq3d,
  title     = {{VQ3D: Learning a 3D-Aware Generative Model on ImageNet}},
  author    = {Sargent, Kyle and Koh, Jing Yu and Zhang, Han and Chang, Huiwen and Herrmann, Charles and Srinivasan, Pratul and Wu, Jiajun and Sun, Deqing},
  booktitle = {International Conference on Computer Vision},
  year      = {2023},
  pages     = {4240--4250},
  doi       = {10.1109/ICCV51070.2023.00391},
  url       = {https://mlanthology.org/iccv/2023/sargent2023iccv-vq3d/}
}