Multi-Frame GAN: Image Enhancement for Stereo Visual Odometry in Low Light
Abstract
We propose the concept of a multi-frame GAN (MFGAN) and demonstrate its potential as an image sequence enhancement method for stereo visual odometry in low-light conditions. We base our method on an invertible adversarial network to transfer the beneficial features of brightly illuminated scenes to a sequence captured in poor illumination, without requiring costly paired datasets. To preserve coherent geometric cues in the translated sequence, we present a novel network architecture as well as a novel loss term combining temporal and stereo consistencies based on optical flow estimation. We demonstrate that the enhanced sequences improve the performance of state-of-the-art feature-based and direct stereo visual odometry methods on both synthetic and real datasets under challenging illumination. We also show that MFGAN outperforms other state-of-the-art image enhancement and style transfer methods by a large margin in terms of visual odometry.
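The temporal consistency term described above can be sketched roughly as follows: the previous translated frame is warped into the current view using estimated optical flow, and the photometric difference to the current translated frame is penalized over valid pixels. Note this is only an illustrative assumption based on the abstract, not the paper's exact formulation; the function names, the nearest-neighbor warping, and the L1 penalty are choices made here for brevity.

```python
import numpy as np

def warp_with_flow(image, flow):
    """Backward-warp `image` using per-pixel flow (nearest-neighbor sampling).

    flow[y, x] = (dx, dy) points from the target pixel to its source pixel.
    """
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return image[src_y, src_x]

def consistency_loss(translated_t, translated_tm1, flow_t_to_tm1, valid_mask):
    """L1 difference between the current translated frame and the previous
    translated frame warped into the current view, averaged over valid pixels.
    An analogous term with left-right flow would give the stereo consistency."""
    warped = warp_with_flow(translated_tm1, flow_t_to_tm1)
    diff = np.abs(translated_t - warped)
    return float((diff * valid_mask).sum() / max(valid_mask.sum(), 1))
```

With zero flow and identical consecutive frames the loss is exactly zero, which is the sanity check one would expect of any such warping-based term.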
Cite
Text
Jung et al. "Multi-Frame GAN: Image Enhancement for Stereo Visual Odometry in Low Light." Conference on Robot Learning, 2019.
Markdown
[Jung et al. "Multi-Frame GAN: Image Enhancement for Stereo Visual Odometry in Low Light." Conference on Robot Learning, 2019.](https://mlanthology.org/corl/2019/jung2019corl-multiframe/)
BibTeX
@inproceedings{jung2019corl-multiframe,
title = {{Multi-Frame GAN: Image Enhancement for Stereo Visual Odometry in Low Light}},
author = {Jung, Eunah and Yang, Nan and Cremers, Daniel},
booktitle = {Conference on Robot Learning},
year = {2019},
pages = {651--660},
volume = {100},
url = {https://mlanthology.org/corl/2019/jung2019corl-multiframe/}
}