Learning a Practical SDR-to-HDRTV Up-Conversion Using New Dataset and Degradation Models

Abstract

In the media industry, the demand for SDR-to-HDRTV up-conversion arises when users own HDR-WCG (high dynamic range, wide color gamut) TVs while most off-the-shelf footage is still in SDR (standard dynamic range). The research community has started tackling this low-level vision task with learning-based approaches. Yet when applied to real SDR, current methods tend to produce dim and desaturated results, offering nearly no improvement in viewing experience. Unlike other network-oriented methods, we attribute this deficiency to the training set (HDR-SDR pairs). Consequently, we propose a new HDRTV dataset (dubbed HDRTV4K) and new HDR-to-SDR degradation models, which are used to train a luminance-segmented network (LSN) consisting of a global mapping trunk and two Transformer branches covering the bright and dark luminance ranges. We also update the assessment criteria with tailored metrics and a subjective experiment. Finally, ablation studies are conducted to prove the effectiveness of our approach. Our work is available at: https://github.com/AndreGuo/HDRTVDM.
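To make the luminance-segmented idea concrete, below is a minimal PyTorch sketch of such an architecture: a global mapping trunk applied to all pixels, plus two branches whose outputs are gated to the dark and bright luminance ranges. It is not the authors' code (see the repository above for that); the module widths, the 0.15/0.85 mask thresholds, the soft-mask formulation, and the use of small convolutional stacks in place of the paper's Transformer branches are all illustrative assumptions.

```python
# Hypothetical sketch of a luminance-segmented network (LSN-style).
# All names, sizes, and thresholds are assumptions, not the paper's.
import torch
import torch.nn as nn

class GlobalMappingTrunk(nn.Module):
    """1x1 convolutions approximate a global, pixel-independent color mapping."""
    def __init__(self, ch=3, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, width, 1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 1), nn.ReLU(inplace=True),
            nn.Conv2d(width, ch, 1),
        )

    def forward(self, x):
        return self.net(x)

class LocalBranch(nn.Module):
    """Stand-in for a Transformer branch; here, a small conv stack."""
    def __init__(self, ch=3, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, ch, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class LuminanceSegmentedNet(nn.Module):
    def __init__(self, dark_thr=0.15, bright_thr=0.85):
        super().__init__()
        self.trunk = GlobalMappingTrunk()
        self.dark_branch = LocalBranch()
        self.bright_branch = LocalBranch()
        self.dark_thr, self.bright_thr = dark_thr, bright_thr

    def forward(self, sdr):  # sdr: (B, 3, H, W), values in [0, 1]
        base = self.trunk(sdr)
        # Soft masks selecting the dark and bright luminance segments.
        luma = sdr.mean(dim=1, keepdim=True)
        dark_mask = (self.dark_thr - luma).clamp(min=0) / self.dark_thr
        bright_mask = (luma - self.bright_thr).clamp(min=0) / (1.0 - self.bright_thr)
        # Each branch refines only its own luminance range.
        return base + dark_mask * self.dark_branch(sdr) \
                    + bright_mask * self.bright_branch(sdr)

# Usage: up-convert a dummy SDR frame.
hdr = LuminanceSegmentedNet()(torch.rand(1, 3, 64, 64))
print(hdr.shape)  # torch.Size([1, 3, 64, 64])
```

The design intuition this sketch tries to capture is that a single global mapping handles most of the tonal expansion, while the extreme dark and bright ranges, where SDR clipping and quantization lose the most information, get dedicated capacity.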

Cite

Text

Guo et al. "Learning a Practical SDR-to-HDRTV Up-Conversion Using New Dataset and Degradation Models." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.02129

Markdown

[Guo et al. "Learning a Practical SDR-to-HDRTV Up-Conversion Using New Dataset and Degradation Models." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/guo2023cvpr-learning/) doi:10.1109/CVPR52729.2023.02129

BibTeX

@inproceedings{guo2023cvpr-learning,
  title     = {{Learning a Practical SDR-to-HDRTV Up-Conversion Using New Dataset and Degradation Models}},
  author    = {Guo, Cheng and Fan, Leidong and Xue, Ziyu and Jiang, Xiuhua},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2023},
  pages     = {22231--22241},
  doi       = {10.1109/CVPR52729.2023.02129},
  url       = {https://mlanthology.org/cvpr/2023/guo2023cvpr-learning/}
}