Acquiring Dynamic Light Fields Through Coded Aperture Camera

Abstract

We investigate the problem of compressively acquiring a dynamic light field. A promising solution for compressive light field acquisition is a coded aperture camera, with which an entire light field can be computationally reconstructed from several images captured through differently coded aperture patterns. However, previous work with this method assumed that the scene remains static throughout the acquisition process, which restricted real-world applications. In this study, we instead assume that the target scene may change over time and propose a method for acquiring a dynamic light field (a moving scene) using a coded aperture camera and a convolutional neural network (CNN). To handle scene motion successfully, we develop a new image-observation configuration, called V-shape observation, and train the CNN on a dynamic-light-field dataset with pseudo motions. Our method is validated through experiments on both a computer-generated scene and a real camera.
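As background for the acquisition model the abstract describes, the sketch below simulates the standard coded-aperture observation: each captured image is the sum of the light field's sub-aperture views weighted by that exposure's aperture transmittance code. All sizes and variable names here (`M`, `H`, `W`, `N`, `codes`) are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical dimensions: M sub-aperture views of H x W pixels,
# acquired over N coded exposures (e.g. a 5x5 angular grid).
M, H, W = 25, 32, 32
N = 4

rng = np.random.default_rng(0)
light_field = rng.random((M, H, W))   # stand-in for the true light field

# Each exposure n applies a transmittance pattern over the M aperture
# positions; the sensor integrates the code-weighted sum of the views.
codes = rng.random((N, M))            # a_{n,m} in [0, 1]
observations = np.tensordot(codes, light_field, axes=([1], [0]))  # (N, H, W)
```

Reconstruction then amounts to inverting this heavily underdetermined linear map (N images from M views), which is what the CNN in the paper learns to do; this snippet only models the forward observation.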

Cite

Text

Sakai et al. "Acquiring Dynamic Light Fields Through Coded Aperture Camera." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58529-7_22

Markdown

[Sakai et al. "Acquiring Dynamic Light Fields Through Coded Aperture Camera." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/sakai2020eccv-acquiring/) doi:10.1007/978-3-030-58529-7_22

BibTeX

@inproceedings{sakai2020eccv-acquiring,
  title     = {{Acquiring Dynamic Light Fields Through Coded Aperture Camera}},
  author    = {Sakai, Kohei and Takahashi, Keita and Fujii, Toshiaki and Nagahara, Hajime},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2020},
  doi       = {10.1007/978-3-030-58529-7_22},
  url       = {https://mlanthology.org/eccv/2020/sakai2020eccv-acquiring/}
}