Seeing Dynamic Scene in the Dark: A High-Quality Video Dataset with Mechatronic Alignment

Abstract

Low-light video enhancement is an important task. Previous work mostly trains on paired static images or paired videos of static scenes. We compile a new dataset of high-quality, spatially aligned video pairs of dynamic scenes captured under low- and normal-light conditions. We built it with a mechatronic system that precisely controls the scene dynamics during video capture, and we further align the video pairs, both spatially and temporally, by identifying the system's uniform-motion stage. Beyond the dataset, we propose an end-to-end framework in which a self-supervised strategy reduces noise while enhancing illumination based on Retinex theory. Extensive experiments on various metrics and a large-scale user study demonstrate the value of our dataset and the effectiveness of our method. The dataset and code are available at https://github.com/dvlab-research/SDSD.
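To illustrate the Retinex idea the abstract refers to, here is a minimal sketch, not the paper's method: Retinex theory models an image as the product of reflectance and illumination, so enhancement can estimate the illumination map, brighten it, and recombine. The channel-max illumination estimator and the gamma value below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def retinex_enhance(img, gamma=0.5, eps=1e-6):
    """Toy Retinex-style enhancement (illustrative sketch only).

    img: float array in [0, 1] of shape (H, W, 3).
    Retinex models the image as I = R * L (reflectance * illumination).
    Here L is crudely estimated as the per-pixel channel maximum,
    R is recovered by division, and L is brightened with a gamma curve.
    """
    illum = img.max(axis=2, keepdims=True)    # rough illumination map L
    refl = img / (illum + eps)                # reflectance R = I / L
    illum_adj = np.power(illum, gamma)        # gamma < 1 lifts dark regions
    return np.clip(refl * illum_adj, 0.0, 1.0)
```

Applied to a dark frame (e.g. mean intensity 0.04), this lifts the illumination toward 0.2 while leaving reflectance, and hence scene content, unchanged; the paper's framework learns this enhancement end-to-end instead.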

Cite

Text

Wang et al. "Seeing Dynamic Scene in the Dark: A High-Quality Video Dataset with Mechatronic Alignment." International Conference on Computer Vision, 2021. doi:10.1109/ICCV48922.2021.00956

Markdown

[Wang et al. "Seeing Dynamic Scene in the Dark: A High-Quality Video Dataset with Mechatronic Alignment." International Conference on Computer Vision, 2021.](https://mlanthology.org/iccv/2021/wang2021iccv-seeing/) doi:10.1109/ICCV48922.2021.00956

BibTeX

@inproceedings{wang2021iccv-seeing,
  title     = {{Seeing Dynamic Scene in the Dark: A High-Quality Video Dataset with Mechatronic Alignment}},
  author    = {Wang, Ruixing and Xu, Xiaogang and Fu, Chi-Wing and Lu, Jiangbo and Yu, Bei and Jia, Jiaya},
  booktitle = {International Conference on Computer Vision},
  year      = {2021},
  pages     = {9700--9709},
  doi       = {10.1109/ICCV48922.2021.00956},
  url       = {https://mlanthology.org/iccv/2021/wang2021iccv-seeing/}
}