DIFFER: Moving Beyond 3D Reconstruction with Differentiable Feature Rendering
Abstract
Perception of 3D object properties from 2D images forms one of the core computer vision problems. In this work, we propose a deep learning system that can simultaneously reason about 3D shape as well as associated properties (such as color and semantic part segments) directly from a single 2D image. We devise a novel depth-aware differentiable feature rendering module (DIFFER) that is used to train our model using only 2D supervision. Experiments on both the synthetic ShapeNet dataset and the real-world Pix3D dataset demonstrate that our 2D-supervised DIFFER model performs on par with, and sometimes even outperforms, existing 3D-supervised models.
Cite
Text
Navaneet et al. "DIFFER: Moving Beyond 3D Reconstruction with Differentiable Feature Rendering." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.
Markdown
[Navaneet et al. "DIFFER: Moving Beyond 3D Reconstruction with Differentiable Feature Rendering." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.](https://mlanthology.org/cvprw/2019/l2019cvprw-differ/)
BibTeX
@inproceedings{l2019cvprw-differ,
title = {{DIFFER: Moving Beyond 3D Reconstruction with Differentiable Feature Rendering}},
author = {Navaneet, K. L. and Mandikal, Priyanka and Jampani, Varun and Babu, R. Venkatesh},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2019},
pages = {18--24},
url = {https://mlanthology.org/cvprw/2019/l2019cvprw-differ/}
}