CodeNeRF: Disentangled Neural Radiance Fields for Object Categories

Abstract

CodeNeRF is an implicit 3D neural representation that learns the variation of object shapes and textures across a category and can be trained, from a set of posed images, to synthesize novel views of unseen objects. Unlike the original NeRF, which is scene-specific, CodeNeRF learns to disentangle shape and texture by learning separate embeddings. At test time, given a single unposed image of an unseen object, CodeNeRF jointly estimates the camera viewpoint and the shape and appearance codes via optimization. Unseen objects can be reconstructed from a single image and then rendered from new viewpoints, or have their shape and texture edited by varying the latent codes. We conduct experiments on the SRN benchmark, which show that CodeNeRF generalises well to unseen objects and achieves on-par performance with methods that require known camera pose at test time. Our results on real-world images demonstrate that CodeNeRF can bridge the sim-to-real gap. Project page: https://github.com/wayne1123/code-nerf
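The abstract describes the core mechanism: a NeRF-style MLP conditioned on separate shape and texture codes, inverted at test time by freezing the network and optimizing the codes (and camera pose) against a photometric loss. Below is a minimal sketch of that idea in PyTorch. The code dimension, layer sizes, and dummy sample tensors are illustrative assumptions, not the paper's exact architecture; positional encoding, volume rendering, and the joint pose optimization are omitted for brevity.

import torch
import torch.nn as nn

class CodeNeRFSketch(nn.Module):
    # Conditional NeRF MLP: density depends on 3D position and a shape
    # code; colour additionally depends on view direction and a texture
    # code, which is what lets shape and texture be edited independently.
    def __init__(self, code_dim=256, hidden=256):
        super().__init__()
        self.shape_mlp = nn.Sequential(
            nn.Linear(3 + code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma = nn.Linear(hidden, 1)  # volume density head
        self.rgb_mlp = nn.Sequential(
            nn.Linear(hidden + 3 + code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, x, d, z_shape, z_texture):
        h = self.shape_mlp(torch.cat([x, z_shape], dim=-1))
        sigma = torch.relu(self.sigma(h))
        rgb = self.rgb_mlp(torch.cat([h, d, z_texture], dim=-1))
        return rgb, sigma

# Test-time inversion: freeze the trained network and optimise only the
# latent codes against a photometric loss. The paper also optimises the
# camera pose jointly; here dummy tensors stand in for a real ray
# sampler and renderer.
model = CodeNeRFSketch()
model.requires_grad_(False)
z_s = torch.zeros(1, 256, requires_grad=True)  # shape code
z_t = torch.zeros(1, 256, requires_grad=True)  # texture code
opt = torch.optim.Adam([z_s, z_t], lr=1e-2)
pts = torch.rand(1024, 3)                                    # stand-in 3D samples
dirs = nn.functional.normalize(torch.rand(1024, 3), dim=-1)  # stand-in view dirs
target = torch.rand(1024, 3)                                 # stand-in pixel colours
for step in range(100):
    rgb, _ = model(pts, dirs, z_s.expand(1024, -1), z_t.expand(1024, -1))
    loss = ((rgb - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

After convergence, varying z_s while holding z_t fixed edits the shape, and vice versa for texture, which is the disentangled editing the abstract refers to.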

Cite

Text

Jang and Agapito. "CodeNeRF: Disentangled Neural Radiance Fields for Object Categories." International Conference on Computer Vision, 2021. doi:10.1109/ICCV48922.2021.01271

Markdown

[Jang and Agapito. "CodeNeRF: Disentangled Neural Radiance Fields for Object Categories." International Conference on Computer Vision, 2021.](https://mlanthology.org/iccv/2021/jang2021iccv-codenerf/) doi:10.1109/ICCV48922.2021.01271

BibTeX

@inproceedings{jang2021iccv-codenerf,
  title     = {{CodeNeRF: Disentangled Neural Radiance Fields for Object Categories}},
  author    = {Jang, Wonbong and Agapito, Lourdes},
  booktitle = {International Conference on Computer Vision},
  year      = {2021},
  pages     = {12949--12958},
  doi       = {10.1109/ICCV48922.2021.01271},
  url       = {https://mlanthology.org/iccv/2021/jang2021iccv-codenerf/}
}