Learning Color Representations for Low-Light Image Enhancement
Abstract
Color conveys important information about the visible world. Under low-light conditions, however, both pixel intensities and the true color distribution can shift significantly. Moreover, many of these distortions are difficult to recover because enhancement is an ill-posed inverse problem. In the present study, we utilized recent advancements in learning-based methods for low-light image enhancement. While most deep learning methods aim to restore high-level, object-oriented visual information, we hypothesized that learning-based methods can also restore color-based information. To test this hypothesis, we propose a novel color representation learning method for low-light image enhancement. More specifically, we use a channel-aware residual network and a differentiable intensity histogram to capture color features. Experimental results on synthetic and natural datasets suggest that the proposed learning scheme achieves state-of-the-art performance. We conclude that inter-channel dependency and color distribution matching are crucial factors for learning color representations under low-light conditions.
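The abstract's key ingredient is a differentiable intensity histogram: unlike hard binning, each pixel contributes to bins through a smooth kernel, so a histogram-based color-distribution loss can be backpropagated. The sketch below is a minimal NumPy illustration of the general soft-binning idea, not the authors' actual implementation; the bin count, Gaussian kernel, and `sigma` value are assumptions for illustration.

```python
import numpy as np

def soft_histogram(values, num_bins=16, sigma=0.05):
    """Differentiable (soft) intensity histogram via Gaussian kernels.

    Each pixel contributes to every bin with a weight that decays
    smoothly with its distance to the bin center, so gradients can
    flow through the binning step (unlike a hard np.histogram).
    Assumes intensities lie in [0, 1].
    """
    centers = (np.arange(num_bins) + 0.5) / num_bins       # bin centers in [0, 1]
    diff = values.reshape(-1, 1) - centers.reshape(1, -1)  # pixel-to-center distances
    weights = np.exp(-0.5 * (diff / sigma) ** 2)           # Gaussian soft assignment
    weights /= weights.sum(axis=1, keepdims=True)          # each pixel's mass sums to 1
    hist = weights.sum(axis=0)
    return hist / hist.sum()                               # normalized distribution

# Example: low-light intensities cluster near zero; brightening shifts the mass.
rng = np.random.default_rng(0)
dark = rng.uniform(0.0, 0.3, size=1024)
bright = np.clip(dark * 2.5, 0.0, 1.0)
h_dark, h_bright = soft_histogram(dark), soft_histogram(bright)
# An L1 distance between soft histograms could serve as a color-distribution loss.
l1 = np.abs(h_dark - h_bright).sum()
```

In a training loop the same computation would be written with autograd tensors (e.g. PyTorch), so the histogram-matching loss produces gradients for the enhancement network.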
Cite
Text

Kim et al. "Learning Color Representations for Low-Light Image Enhancement." Winter Conference on Applications of Computer Vision, 2022.

Markdown

[Kim et al. "Learning Color Representations for Low-Light Image Enhancement." Winter Conference on Applications of Computer Vision, 2022.](https://mlanthology.org/wacv/2022/kim2022wacv-learning/)

BibTeX
@inproceedings{kim2022wacv-learning,
title = {{Learning Color Representations for Low-Light Image Enhancement}},
author = {Kim, Bomi and Lee, Sunhyeok and Kim, Nahyun and Jang, Donggon and Kim, Dae-Shik},
booktitle = {Winter Conference on Applications of Computer Vision},
year = {2022},
pages = {1455--1463},
url = {https://mlanthology.org/wacv/2022/kim2022wacv-learning/}
}