Space-Sampling Method for 3D Cinemas
Abstract
A new approach to 3D cinematography based on a Space-Sampling method is presented. This method treats a projected 3D image space instead of a linear 3D space, which matches human visual perception well and substantially reduces the data size needed for 3D scenes. It is also well suited to 3D camera shooting and to 3D displays that require no eyewear. The Space-Sampling method captures 3D natural scenes through a lens with transparent image-sensor layers, so the 3D image space is sampled directly at video rate with a single camera. Captured images are rendered on a transparent multi-layer display, and a truly volumetric image is observed without eyeglasses. Rendering these natural scenes in combination with 3D computer-graphics (CG) objects creates a realistic Synthesized & Natural Hybrid scene at a reasonable cost. Depth-from-Focus technology is used to eliminate blurred objects, and Depth-Fuse anti-aliasing technology to enhance the depth resolution.
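The depth-fused rendering the abstract refers to can be illustrated by how a point's luminance is divided between two transparent display layers: weighting each layer by the point's distance to the other layer makes the fused image appear at an intermediate depth. A minimal sketch, with a hypothetical helper name and linear weighting assumed (not the authors' implementation):

```python
def split_intensity(lum, z, z_front, z_back):
    """Divide one point's luminance between two transparent layers.

    A point at depth z (z_front <= z <= z_back) is drawn on both layers,
    with each layer weighted by the point's proximity to it, so the
    fused percept lies between the physical layers.
    Hypothetical helper illustrating depth-fused rendering in general;
    the paper's actual pipeline is not reproduced here.
    """
    # Weight for the front layer grows as the point nears z_front.
    w_front = (z_back - z) / (z_back - z_front)
    return w_front * lum, (1.0 - w_front) * lum

# A point a quarter of the way back puts 3/4 of its luminance on the front layer.
front, back = split_intensity(200.0, z=0.25, z_front=0.0, z_back=1.0)
```

The total luminance is preserved, so only the apparent depth, not the brightness, changes as a point moves between layers.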
Cite
Text
Senoh et al. "Space-Sampling Method for 3D Cinemas." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2006. doi:10.1109/CVPRW.2006.195

Markdown
[Senoh et al. "Space-Sampling Method for 3D Cinemas." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2006.](https://mlanthology.org/cvprw/2006/senoh2006cvprw-spacesampling/) doi:10.1109/CVPRW.2006.195

BibTeX
@inproceedings{senoh2006cvprw-spacesampling,
title = {{Space-Sampling Method for 3D Cinemas}},
author = {Senoh, Takanori and Aoki, Terumasa and Yasuda, Hiroshi and Kogure, Takuyo},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2006},
pages = {170},
doi = {10.1109/CVPRW.2006.195},
url = {https://mlanthology.org/cvprw/2006/senoh2006cvprw-spacesampling/}
}