Recurrent Color Constancy
Abstract
We introduce a novel formulation of temporal color constancy which considers multiple frames preceding the frame for which illumination is estimated. We propose an end-to-end trainable recurrent color constancy network -- the RCC-Net -- which exploits convolutional LSTMs and a simulated sequence to learn compositional representations in space and time. We use a standard single-frame color constancy benchmark, the SFU Gray Ball Dataset, which can be adapted to a temporal setting. Extensive experiments show that the proposed method consistently outperforms single-frame state-of-the-art methods and their temporal variants.
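To make the ConvLSTM-over-frames idea concrete, below is a minimal PyTorch sketch of a recurrent illuminant estimator: a small CNN encodes each frame, a convolutional LSTM aggregates features across the sequence, and a head regresses a unit-norm RGB illuminant. This is an illustrative assumption, not the authors' RCC-Net: the encoder, channel widths, and the `RecurrentIlluminantNet` / `angular_error` names are invented here, and the paper's pretrained backbone and simulated-sequence branch are not reproduced.

```python
# Hedged sketch of a ConvLSTM-based temporal illuminant estimator.
# NOT the published RCC-Net; architecture details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvLSTMCell(nn.Module):
    """Convolutional LSTM cell: gates are computed with convolutions,
    so the hidden state keeps a spatial layout."""

    def __init__(self, in_ch, hid_ch, kernel=3):
        super().__init__()
        self.hid_ch = hid_ch
        # One conv produces all four gates (input, forget, output, candidate).
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, kernel,
                               padding=kernel // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c


class RecurrentIlluminantNet(nn.Module):
    """Toy temporal color-constancy model: per-frame CNN features are
    aggregated over time by a ConvLSTM; a linear head predicts the illuminant."""

    def __init__(self, hid_ch=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.cell = ConvLSTMCell(32, hid_ch)
        self.head = nn.Linear(hid_ch, 3)

    def forward(self, frames):  # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = [self.encoder(frames[:, step]) for step in range(t)]
        hid = torch.zeros(b, self.cell.hid_ch, *feats[0].shape[-2:],
                          device=frames.device)
        cell = torch.zeros_like(hid)
        for f in feats:                        # recurrence over the sequence
            hid, cell = self.cell(f, (hid, cell))
        pooled = hid.mean(dim=(2, 3))          # global average pooling
        return F.normalize(self.head(pooled))  # unit-norm RGB illuminant


def angular_error(pred, gt):
    """Angular error in degrees, the standard color constancy metric."""
    cos = (F.normalize(pred) * F.normalize(gt)).sum(dim=1).clamp(-1.0, 1.0)
    return torch.rad2deg(torch.acos(cos))


net = RecurrentIlluminantNet()
seq = torch.rand(2, 5, 3, 64, 64)  # two 5-frame sequences
est = net(seq)                     # (2, 3) illuminant estimates
```

Because only the final hidden state feeds the regression head, the estimate for the last frame is conditioned on all preceding frames, which is the temporal formulation the abstract describes.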
Cite
Text
Qian et al. "Recurrent Color Constancy." International Conference on Computer Vision, 2017. doi:10.1109/ICCV.2017.582
Markdown
[Qian et al. "Recurrent Color Constancy." International Conference on Computer Vision, 2017.](https://mlanthology.org/iccv/2017/qian2017iccv-recurrent/) doi:10.1109/ICCV.2017.582
BibTeX
@inproceedings{qian2017iccv-recurrent,
title = {{Recurrent Color Constancy}},
author = {Qian, Yanlin and Chen, Ke and Nikkanen, Jarno and Kamarainen, Joni-Kristian and Matas, Jiri},
booktitle = {International Conference on Computer Vision},
year = {2017},
doi = {10.1109/ICCV.2017.582},
url = {https://mlanthology.org/iccv/2017/qian2017iccv-recurrent/}
}