OrthoPlanes: A Novel Representation for Better 3D-Awareness of GANs

Abstract

We present a new method for generating realistic, view-consistent images with fine geometry from 2D image collections. We propose a hybrid explicit-implicit representation called OrthoPlanes, which encodes fine-grained 3D information in feature maps that can be efficiently generated by modifying 2D StyleGANs. Compared to previous representations, ours offers better scalability and expressiveness, with clear and explicit spatial information. As a result, our method can handle more challenging viewing angles and synthesize articulated objects with many spatial degrees of freedom. Experiments demonstrate that our method achieves state-of-the-art results on the FFHQ and SHHQ datasets, both quantitatively and qualitatively.
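To make the idea of a hybrid explicit-implicit plane representation concrete, the sketch below queries per-point features from three stacks of axis-aligned feature planes, combining bilinear in-plane sampling with linear interpolation across plane offsets. This is only an illustration of the general plane-sampling pattern under assumed conventions (plane layout, spacing in [-1, 1], summation across axes); the paper's actual architecture and aggregation may differ.

```python
import numpy as np

def _bilinear(plane, uv):
    """Bilinearly sample a (C, R, R) feature map at uv in [-1, 1]^2 -> (N, C)."""
    C, R, _ = plane.shape
    xy = (uv + 1.0) * 0.5 * (R - 1)                 # map to pixel coordinates
    x0 = np.clip(np.floor(xy).astype(int), 0, R - 2)
    f = xy - x0                                      # fractional part, (N, 2)
    x1 = x0 + 1
    g = lambda ix, iy: plane[:, iy, ix].T            # gather a corner -> (N, C)
    top = g(x0[:, 0], x0[:, 1]) * (1 - f[:, :1]) + g(x1[:, 0], x0[:, 1]) * f[:, :1]
    bot = g(x0[:, 0], x1[:, 1]) * (1 - f[:, :1]) + g(x1[:, 0], x1[:, 1]) * f[:, :1]
    return top * (1 - f[:, 1:2]) + bot * f[:, 1:2]

def sample_orthoplanes(planes, pts):
    """
    Illustrative plane-stack lookup (assumed layout, not the authors' code).

    planes: (3, K, C, R, R) -- for each of the 3 axes, K feature planes of
            C channels, evenly spaced along that axis over [-1, 1].
    pts:    (N, 3) query points in [-1, 1]^3.
    returns (N, C) features, summed over the three orthogonal stacks.
    """
    _, K, C, _, _ = planes.shape
    out = np.zeros((pts.shape[0], C))
    for axis in range(3):
        uv = np.delete(pts, axis, axis=1)            # in-plane coordinates
        t = (pts[:, axis] + 1.0) * 0.5 * (K - 1)     # fractional plane index
        for k in range(K):
            # linear "hat" weight interpolates between the two nearest planes
            w = np.clip(1.0 - np.abs(t - k), 0.0, None)
            if np.any(w > 0):
                out += w[:, None] * _bilinear(planes[axis, k], uv)
    return out
```

With constant all-ones planes, every query point recovers a feature of 3.0 per channel (one unit from each orthogonal stack), which is a quick sanity check that the interpolation weights sum to one.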

Cite

Text

He et al. "OrthoPlanes: A Novel Representation for Better 3D-Awareness of GANs." International Conference on Computer Vision, 2023. doi:10.1109/ICCV51070.2023.02102

Markdown

[He et al. "OrthoPlanes: A Novel Representation for Better 3D-Awareness of GANs." International Conference on Computer Vision, 2023.](https://mlanthology.org/iccv/2023/he2023iccv-orthoplanes/) doi:10.1109/ICCV51070.2023.02102

BibTeX

@inproceedings{he2023iccv-orthoplanes,
  title     = {{OrthoPlanes: A Novel Representation for Better 3D-Awareness of GANs}},
  author    = {He, Honglin and Yang, Zhuoqian and Li, Shikai and Dai, Bo and Wu, Wayne},
  booktitle = {International Conference on Computer Vision},
  year      = {2023},
  pages     = {22996--23007},
  doi       = {10.1109/ICCV51070.2023.02102},
  url       = {https://mlanthology.org/iccv/2023/he2023iccv-orthoplanes/}
}