View Generalization for Single Image Textured 3D Models
Abstract
Humans can easily infer the underlying 3D geometry and texture of an object from only a single 2D image. Current computer vision methods can do this, too, but suffer from view generalization problems -- the models inferred tend to make poor predictions of appearance in novel views. As with generalization problems in machine learning, the difficulty is balancing single-view accuracy (cf. training error; bias) with novel view accuracy (cf. test error; variance). We describe a class of models whose geometric rigidity is easily controlled to manage this tradeoff. We describe a cycle consistency loss that improves view generalization (roughly, a model from a generated view should predict the original view well). View generalization of textures requires that models share texture information, so a car seen from the back still has headlights because other cars have headlights. We describe a cycle consistency loss that encourages model textures to be aligned, so as to encourage sharing. We compare our method against the state-of-the-art method and show both qualitative and quantitative improvements.
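The view cycle-consistency idea in the abstract (a model inferred from a generated view should re-predict the original view well) can be sketched as a photometric loss. This is not the authors' code; `infer_model` and `render` are hypothetical placeholders standing in for a single-image reconstruction network and a differentiable renderer.

```python
import numpy as np

def infer_model(image):
    # Placeholder: a real method would regress a textured mesh from the image.
    return {"params": image.mean(axis=(0, 1))}

def render(model, view):
    # Placeholder: a real method would rasterize the textured mesh at `view`.
    return np.tile(model["params"], (8, 8, 1))

def cycle_consistency_loss(image, novel_view, original_view):
    """Roughly: infer a model from the image, render a novel view,
    infer a model from that rendering, then re-render the original
    view and penalize the L2 photometric error against the input."""
    model = infer_model(image)                  # image -> 3D model
    novel = render(model, novel_view)           # model -> generated novel view
    model_cycle = infer_model(novel)            # generated view -> model again
    recon = render(model_cycle, original_view)  # back to the original view
    return float(np.mean((recon - image) ** 2))

img = np.full((8, 8, 3), 0.5)
loss = cycle_consistency_loss(img, novel_view=1, original_view=0)
print(loss)
```

With these trivial placeholders the cycle reconstructs the input exactly, so the loss is zero; with real networks the loss would be minimized during training to encourage view generalization.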
Cite
Text
Bhattad et al. "View Generalization for Single Image Textured 3D Models." Conference on Computer Vision and Pattern Recognition, 2021. doi:10.1109/CVPR46437.2021.00602
Markdown
[Bhattad et al. "View Generalization for Single Image Textured 3D Models." Conference on Computer Vision and Pattern Recognition, 2021.](https://mlanthology.org/cvpr/2021/bhattad2021cvpr-view/) doi:10.1109/CVPR46437.2021.00602
BibTeX
@inproceedings{bhattad2021cvpr-view,
title = {{View Generalization for Single Image Textured 3D Models}},
author = {Bhattad, Anand and Dundar, Aysegul and Liu, Guilin and Tao, Andrew and Catanzaro, Bryan},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2021},
pages = {6081--6090},
doi = {10.1109/CVPR46437.2021.00602},
url = {https://mlanthology.org/cvpr/2021/bhattad2021cvpr-view/}
}