MatryODShka: Real-Time 6DoF Video View Synthesis Using Multi-Sphere Images

Abstract

We introduce a method to convert stereo 360° (omnidirectional stereo) imagery into a layered, multi-sphere image representation for six-degree-of-freedom (6DoF) rendering. Stereo 360° imagery can be captured from multi-camera systems for virtual reality (VR) rendering, but it lacks motion parallax and correct-in-all-directions disparity cues. Together, these shortcomings can quickly lead to VR sickness when viewing content. One solution is to generate a format suitable for 6DoF rendering, such as by estimating depth. However, this raises questions as to how to handle disoccluded regions in dynamic scenes. Our approach is to simultaneously learn depth and blending weights via a multi-sphere image representation, which can be rendered with correct 6DoF disparity and motion parallax in VR. This significantly improves viewing comfort, and the representation can be inferred and rendered in real time on modern GPU hardware. Together, these advances move towards making VR video a more comfortable immersive medium.
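The core idea of the representation, a stack of concentric RGBA spheres rendered by alpha compositing along each eye ray, can be sketched compactly. The following is a minimal NumPy illustration, not the authors' implementation: the function names, the equirectangular texture layout, the nearest-neighbour sampling, and the assumption that the viewer stays inside every sphere are all simplifications made for brevity.

# Hypothetical sketch of multi-sphere image (MSI) rendering; not the paper's code.
# Layers are concentric RGBA spheres stored as equirectangular textures; a novel
# view is formed by intersecting eye rays with each sphere and compositing the
# samples back-to-front with the "over" operator.
import numpy as np

def intersect_sphere(origins, dirs, radius):
    """Ray parameter t where origins + t*dirs hits a sphere of the given
    radius centred at the world origin. Assumes unit-length dirs and a
    viewpoint inside the sphere, so the positive root always exists."""
    b = np.sum(origins * dirs, axis=-1)                   # o . d
    c = np.sum(origins * origins, axis=-1) - radius ** 2  # |o|^2 - r^2
    return -b + np.sqrt(b * b - c)

def sample_equirect(texture, points, radius):
    """Nearest-neighbour lookup of an equirectangular HxWx4 texture at 3D
    points lying on the sphere surface (y is treated as 'up')."""
    h, w, _ = texture.shape
    x, y, z = points[..., 0], points[..., 1], points[..., 2]
    theta = np.arctan2(x, -z)                         # longitude in [-pi, pi]
    phi = np.arccos(np.clip(y / radius, -1.0, 1.0))   # colatitude in [0, pi]
    u = ((theta / (2.0 * np.pi) + 0.5) * w).astype(int) % w
    v = np.clip((phi / np.pi * h).astype(int), 0, h - 1)
    return texture[v, u]

def render_msi(layers, radii, origins, dirs):
    """Composite RGBA sphere layers back-to-front along each ray.

    layers:  list of HxWx4 float arrays (premultiplication not assumed).
    radii:   matching sphere radii.
    origins: Nx3 ray origins for the novel viewpoint.
    dirs:    Nx3 unit ray directions.
    Returns Nx3 RGB samples for the novel view.
    """
    rgb = np.zeros(dirs.shape[:-1] + (3,))
    # Sort farthest sphere first so the loop composites back-to-front.
    for texture, radius in sorted(zip(layers, radii), key=lambda lr: -lr[1]):
        t = intersect_sphere(origins, dirs, radius)
        rgba = sample_equirect(texture, origins + t[..., None] * dirs, radius)
        alpha = rgba[..., 3:4]
        rgb = rgba[..., :3] * alpha + rgb * (1.0 - alpha)  # "over" blend
    return rgb

In the paper, the per-layer colours and alphas (the "depth and blending weights" of the abstract) are predicted by a learned network from the ODS input; once predicted, novel views reduce to the per-layer texture lookups and blends above, which is what makes real-time 6DoF rendering feasible on modern GPUs.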

Cite

Text

Attal et al. "MatryODShka: Real-Time 6DoF Video View Synthesis Using Multi-Sphere Images." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58452-8_26

Markdown

[Attal et al. "MatryODShka: Real-Time 6DoF Video View Synthesis Using Multi-Sphere Images." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/attal2020eccv-matryodshka/) doi:10.1007/978-3-030-58452-8_26

BibTeX

@inproceedings{attal2020eccv-matryodshka,
  title     = {{MatryODShka: Real-Time 6DoF Video View Synthesis Using Multi-Sphere Images}},
  author    = {Attal, Benjamin and Ling, Selena and Gokaslan, Aaron and Richardt, Christian and Tompkin, James},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2020},
  doi       = {10.1007/978-3-030-58452-8_26},
  url       = {https://mlanthology.org/eccv/2020/attal2020eccv-matryodshka/}
}