An Epipolar Volume Autoencoder with Adversarial Loss for Deep Light Field Super-Resolution
Abstract
When capturing a light field of a scene, one typically faces a trade-off between spatial and angular resolution. Fortunately, light fields are also a rich source of information for solving the problem of super-resolution. In contrast to single-image approaches, where high-frequency content must be hallucinated as the most plausible explanation of the downscaled input, the sub-aperture views of a light field allow an actual reconstruction of the details that were removed by downsampling. In this paper, we propose a three-dimensional generative adversarial autoencoder network to recover a high-resolution light field from a low-resolution light field with a sparse set of viewpoints. We require only three views along both the horizontal and vertical axes to increase angular resolution by a factor of three while simultaneously increasing spatial resolution by a factor of two or four in each direction.
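To make the abstract's architecture concrete, below is a minimal sketch (not the authors' code) of a 3D convolutional autoencoder that maps a sparse, low-resolution stack of sub-aperture views to a denser, higher-resolution one. The class name, layer widths, the choice of PyTorch, and the exact upsampling step are all illustrative assumptions based only on the abstract; the adversarial (discriminator) loss the paper uses is omitted, so only the generator half is shown.

```python
# Hypothetical sketch of a 3D epipolar-volume autoencoder.
# Assumptions: 3x angular and 2x spatial upscaling, RGB views,
# input shape (batch, 3, views, H, W). Not the published model.
import torch
import torch.nn as nn


class EpipolarVolumeAutoencoder(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        # Encoder: 3D convolutions over (angular, height, width)
        # of the stacked sub-aperture views.
        self.encoder = nn.Sequential(
            nn.Conv3d(3, channels, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
        )
        # Decoder: a transposed 3D convolution upsamples the angular
        # axis by 3 and both spatial axes by 2 in a single step.
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(channels, channels,
                               kernel_size=(3, 2, 2), stride=(3, 2, 2)),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(channels, 3, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 3, views, H, W), e.g. 3 views along one axis.
        return self.decoder(self.encoder(x))


if __name__ == "__main__":
    net = EpipolarVolumeAutoencoder()
    low_res = torch.randn(1, 3, 3, 32, 32)  # 3 views of 32x32 pixels
    high_res = net(low_res)
    print(high_res.shape)  # torch.Size([1, 3, 9, 64, 64])
```

In a GAN setup like the one the abstract names, this generator would be trained with a reconstruction loss plus an adversarial loss from a discriminator that judges whether a super-resolved volume looks like a real high-resolution light field.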
Cite
Text

Zhu et al. "An Epipolar Volume Autoencoder with Adversarial Loss for Deep Light Field Super-Resolution." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019. doi:10.1109/CVPRW.2019.00236

Markdown

[Zhu et al. "An Epipolar Volume Autoencoder with Adversarial Loss for Deep Light Field Super-Resolution." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.](https://mlanthology.org/cvprw/2019/zhu2019cvprw-epipolar/) doi:10.1109/CVPRW.2019.00236

BibTeX
@inproceedings{zhu2019cvprw-epipolar,
title = {{An Epipolar Volume Autoencoder with Adversarial Loss for Deep Light Field Super-Resolution}},
author = {Zhu, Minchen and Alperovich, Anna and Johannsen, Ole and Sulc, Antonin and Goldluecke, Bastian},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2019},
pages = {1853--1861},
doi = {10.1109/CVPRW.2019.00236},
url = {https://mlanthology.org/cvprw/2019/zhu2019cvprw-epipolar/}
}