Light Field Synthesis from a Monocular Image Using Variable LDI

Abstract

Recent advances in learning-based novel view synthesis enable users to synthesize a light field from a monocular image without special equipment. Moreover, state-of-the-art techniques, including the multiplane image (MPI), show outstanding performance in synthesizing an accurate light field from a monocular image. In this study, we propose a new variable layered depth image (VLDI) representation to generate precise light field synthesis results using only a few layers. Our method exploits an LDI representation built on a new two-stream halfway fusion network and a transformation process. This framework has an efficient structure that directly generates, from the inputs, the regions that do not require network prediction. As a result, the proposed method allows us to acquire a high-quality light field easily and quickly. Experimental results show that the proposed method outperforms previous works both quantitatively and qualitatively on diverse examples.
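
The core rendering idea behind an LDI, warping depth layers by their disparity and compositing them to form each sub-aperture view, can be illustrated with a short sketch. The following NumPy code is a minimal approximation of LDI-based view synthesis under assumed inputs; it is not the authors' VLDI network or transformation process. The layer format (per-layer RGB, alpha, and disparity maps) and the nearest-neighbor backward warp are simplifying assumptions made for brevity.

import numpy as np

def synthesize_view(layers, du, dv):
    """Render one sub-aperture view from a layered depth image (LDI).

    layers : list of (rgb, alpha, disp) tuples, ordered far-to-near;
             rgb is (H, W, 3) in [0, 1], alpha and disp are (H, W).
    du, dv : horizontal/vertical offset of the target view from the
             center view, in units of the light-field baseline.
    """
    h, w, _ = layers[0][0].shape
    out = np.zeros((h, w, 3), dtype=np.float32)
    ys, xs = np.mgrid[0:h, 0:w]
    for rgb, alpha, disp in layers:  # composite back-to-front ("over" operator)
        # Backward warp: sample each layer at positions shifted by its
        # disparity scaled by the view offset (nearest neighbor for brevity).
        src_x = np.clip(np.round(xs - du * disp).astype(int), 0, w - 1)
        src_y = np.clip(np.round(ys - dv * disp).astype(int), 0, h - 1)
        rgb_w = rgb[src_y, src_x]
        a_w = alpha[src_y, src_x][..., None]
        out = rgb_w * a_w + out * (1.0 - a_w)
    return out

Sweeping (du, dv) over a regular grid of offsets, e.g. a 5x5 set, would then yield the sub-aperture images of a full light field.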

Cite

Text

Bak and Park. "Light Field Synthesis from a Monocular Image Using Variable LDI." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2023. doi:10.1109/CVPRW59228.2023.00342

Markdown

[Bak and Park. "Light Field Synthesis from a Monocular Image Using Variable LDI." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2023.](https://mlanthology.org/cvprw/2023/bak2023cvprw-light/) doi:10.1109/CVPRW59228.2023.00342

BibTeX

@inproceedings{bak2023cvprw-light,
  title     = {{Light Field Synthesis from a Monocular Image Using Variable LDI}},
  author    = {Bak, Junhyeong and Park, In Kyu},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2023},
  pages     = {3399--3407},
  doi       = {10.1109/CVPRW59228.2023.00342},
  url       = {https://mlanthology.org/cvprw/2023/bak2023cvprw-light/}
}