De-Rendering the World's Revolutionary Artefacts

Abstract

Recent works have shown exciting results in unsupervised image de-rendering: learning to decompose 3D shape, appearance, and lighting from single-image collections without explicit supervision. However, many of these methods assume simplistic material and lighting models. We propose a method, termed RADAR, that can recover environment illumination and surface materials from real single-image collections, relying on neither explicit 3D supervision nor multi-view or multi-light images. Specifically, we focus on rotationally symmetric artefacts, such as vases, that exhibit challenging surface properties including specular reflections. We introduce a novel self-supervised albedo discriminator, which allows the model to recover plausible albedo without requiring any ground truth during training. In conjunction with a shape reconstruction module that exploits rotational symmetry, we present an end-to-end learning framework able to de-render the world's revolutionary artefacts. We conduct experiments on a real vase dataset and demonstrate compelling decomposition results, enabling applications including free-viewpoint rendering and relighting. More results and code at: https://sorderender.github.io/.
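
To make the rotational-symmetry idea concrete, below is a minimal Python sketch of how a surface of revolution can be generated by sweeping a 1D radius profile around a vertical axis. This is an illustration of the general technique only, not RADAR's actual shape module; the function name, sample counts, and the vase-like profile are hypothetical stand-ins for whatever a shape network would predict.

import numpy as np

def surface_of_revolution(radii, heights, n_angles=64):
    """Sweep a 1D radius profile around the vertical (y) axis.

    radii   : (H,) radius of the object at each height sample
    heights : (H,) vertical coordinate of each sample
    Returns an (H, n_angles, 3) array of 3D surface points.
    """
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    r = radii[:, None]                       # (H, 1)
    x = r * np.cos(thetas)[None, :]          # (H, A)
    z = r * np.sin(thetas)[None, :]          # (H, A)
    y = np.broadcast_to(heights[:, None], x.shape)
    return np.stack([x, y, z], axis=-1)      # (H, A, 3)

# Hypothetical example: a bulging, vase-like profile.
heights = np.linspace(0.0, 1.0, 32)
radii = 0.3 + 0.15 * np.sin(np.pi * heights)
points = surface_of_revolution(radii, heights)
print(points.shape)  # (32, 64, 3)

Because the full 3D surface is determined by a single 1D profile, a reconstruction module built on this parameterization only needs to predict H scalars rather than a free-form mesh, which is what makes the single-image setting tractable for this object class.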

Cite

Text

Wu et al. "De-Rendering the World's Revolutionary Artefacts." Conference on Computer Vision and Pattern Recognition, 2021.

Markdown

[Wu et al. "De-Rendering the World's Revolutionary Artefacts." Conference on Computer Vision and Pattern Recognition, 2021.](https://mlanthology.org/cvpr/2021/wu2021cvpr-derendering/)

BibTeX

@inproceedings{wu2021cvpr-derendering,
  title     = {{De-Rendering the World's Revolutionary Artefacts}},
  author    = {Wu, Shangzhe and Makadia, Ameesh and Wu, Jiajun and Snavely, Noah and Tucker, Richard and Kanazawa, Angjoo},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2021},
  pages     = {6338--6347},
  url       = {https://mlanthology.org/cvpr/2021/wu2021cvpr-derendering/}
}