Novel-View Synthesis of Human Tourist Photos
Abstract
We present a novel framework for performing novel-view synthesis on human tourist photos. Given a tourist photo from a known scene, we reconstruct the photo in 3D space by modeling the human and the background independently. We generate a deep buffer from a novel viewpoint of the reconstruction and use a deep network to translate the buffer into a photorealistic rendering of the novel view. We additionally present a method to relight the renderings, allowing both the human and the background to be relit to match either the provided input image or any other. The key contributions of our paper are: 1) a framework for performing novel-view synthesis on human tourist photos, 2) an appearance transfer method for relighting humans to match synthesized backgrounds, and 3) a method for estimating lighting properties from a single human photo.
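The core rendering step the abstract describes, translating a deep buffer rendered from a novel viewpoint into an RGB image with a deep network, can be sketched as below. This is a minimal illustration under assumptions: the class name, channel layout (albedo, normals, per-pixel position), and toy architecture are placeholders, not the authors' implementation.

```python
import torch
import torch.nn as nn


class BufferToImageNet(nn.Module):
    """Toy buffer-to-image translation network. The paper uses a deep network
    for this step; its exact architecture is not reproduced here."""

    def __init__(self, buffer_channels: int = 9):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(buffer_channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, deep_buffer: torch.Tensor) -> torch.Tensor:
        # Translate the rendered deep buffer into an RGB novel-view estimate.
        return self.net(deep_buffer)


# Dummy deep buffer for one novel view: assumed here to stack albedo (3),
# surface normals (3), and per-pixel 3D position (3) rasterized from the
# human + background reconstruction.
deep_buffer = torch.rand(1, 9, 256, 256)

model = BufferToImageNet(buffer_channels=9)
novel_view = model(deep_buffer)  # (1, 3, 256, 256) photorealistic estimate
print(novel_view.shape)
```

Relighting, as described in the abstract, would then adjust the appearance of the rendered human and background to an estimated or target lighting; that step is not sketched here.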
Cite
Text
Freer et al. "Novel-View Synthesis of Human Tourist Photos." Winter Conference on Applications of Computer Vision, 2022.
Markdown
[Freer et al. "Novel-View Synthesis of Human Tourist Photos." Winter Conference on Applications of Computer Vision, 2022.](https://mlanthology.org/wacv/2022/freer2022wacv-novelview/)
BibTeX
@inproceedings{freer2022wacv-novelview,
title = {{Novel-View Synthesis of Human Tourist Photos}},
author = {Freer, Jonathan and Yi, Kwang Moo and Jiang, Wei and Choi, Jongwon and Chang, Hyung Jin},
booktitle = {Winter Conference on Applications of Computer Vision},
year = {2022},
pages = {3069--3076},
url = {https://mlanthology.org/wacv/2022/freer2022wacv-novelview/}
}