Self-Guided Novel View Synthesis via Elastic Displacement Network
Abstract
Synthesizing novel views from different viewpoints is a fundamental problem in 3D vision. Among the variety of view synthesis tasks, single-image view synthesis is particularly challenging. Recent works address this problem with a fixed number of image planes at discrete disparities, which tend to produce structurally inconsistent results on wide-baseline datasets with complex scenes, such as KITTI. In this paper, we propose the Self-Guided Elastic Displacement Network (SG-EDN), which explicitly models the geometric transformation with a novel non-discrete scene representation called layered displacement maps (LDM). To generate realistic views, we exploit the positional characteristics of the displacement maps and design a multi-scale structural pyramid for self-guided filtering on the displacement maps. To optimize efficiency and scene-adaptivity, we allow the effective range of each displacement map to be elastic, with fully learnable parameters. Experimental results confirm that our framework outperforms existing methods in both quantitative and qualitative tests.
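The abstract describes warping a single image with a stack of layered displacement maps whose effective ranges are learnable. Below is a minimal PyTorch sketch of that idea; the layer count, the horizontal-only displacement, and the softmax compositing are illustrative assumptions and are not taken from the paper.

```python
# Hypothetical sketch: layered displacement maps (LDM) with learnable
# "elastic" per-layer ranges, used to warp a single source image into a
# novel view and composite the warped layers.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ElasticLDMWarp(nn.Module):
    def __init__(self, num_layers: int = 8):
        super().__init__()
        # One learnable elastic range per displacement layer (assumed init).
        self.ranges = nn.Parameter(torch.linspace(0.01, 0.2, num_layers))

    def forward(self, image, disp_layers, weights):
        """
        image:       (B, 3, H, W) source view
        disp_layers: (B, L, H, W) per-layer displacement maps in [-1, 1]
        weights:     (B, L, H, W) per-layer compositing logits
        """
        b, _, h, w = image.shape
        # Base sampling grid in normalized [-1, 1] coordinates.
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=image.device),
            torch.linspace(-1, 1, w, device=image.device),
            indexing="ij",
        )
        base = torch.stack((xs, ys), dim=-1).expand(b, h, w, 2)

        warped = []
        for l in range(disp_layers.shape[1]):
            # Scale each layer's displacement by its learnable elastic range
            # (horizontal shift only, as in a rectified stereo setup).
            dx = disp_layers[:, l] * self.ranges[l]
            grid = base.clone()
            grid[..., 0] = grid[..., 0] + dx
            warped.append(F.grid_sample(image, grid, align_corners=True))

        warped = torch.stack(warped, dim=1)             # (B, L, 3, H, W)
        alpha = torch.softmax(weights, dim=1).unsqueeze(2)
        return (alpha * warped).sum(dim=1)              # composited novel view
```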
Cite
Text
Liu et al. "Self-Guided Novel View Synthesis via Elastic Displacement Network." Winter Conference on Applications of Computer Vision, 2020.
Markdown
[Liu et al. "Self-Guided Novel View Synthesis via Elastic Displacement Network." Winter Conference on Applications of Computer Vision, 2020.](https://mlanthology.org/wacv/2020/liu2020wacv-selfguided/)
BibTeX
@inproceedings{liu2020wacv-selfguided,
  title = {{Self-Guided Novel View Synthesis via Elastic Displacement Network}},
  author = {Liu, Yicun and Zhang, Jiawei and Ma, Ye and Ren, Jimmy},
  booktitle = {Winter Conference on Applications of Computer Vision},
  year = {2020},
  url = {https://mlanthology.org/wacv/2020/liu2020wacv-selfguided/}
}