S3Net: A Single Stream Structure for Depth Guided Image Relighting

Abstract

Depth guided any-to-any image relighting aims to generate a relit image from an original image and its depth map so that it matches the illumination setting of a given guided image and its depth map. To the best of our knowledge, this task is a new challenge that has not been addressed in the previous literature. To address this issue, we propose a deep learning-based Single Stream Structure network, called S3Net, for depth guided image relighting. This network is an encoder-decoder model. We concatenate all images and their corresponding depth maps as the input and feed them into the model. The decoder part contains an attention module and an enhanced module to focus on the relighting-related regions in the guided images. Experiments performed on a challenging benchmark show that the proposed model achieves the 3rd highest SSIM in the NTIRE 2021 Depth Guided Any-to-any Relighting Challenge.
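
To make the single-stream idea concrete, below is a minimal PyTorch sketch of the input concatenation and encoder-decoder layout described in the abstract. All names (S3NetSketch, SpatialAttention), channel widths, and layer counts are illustrative assumptions; the authors' actual backbone, attention module, and enhanced module differ from this stand-in.

import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Illustrative attention block (stand-in for the paper's attention module)."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, 1, kernel_size=7, padding=3)

    def forward(self, x):
        # Weight each spatial location, letting the decoder focus on
        # relighting-related regions of the feature map.
        return x * torch.sigmoid(self.conv(x))

class S3NetSketch(nn.Module):
    """Single-stream sketch: all images and depth maps enter one branch."""
    def __init__(self, base=32):
        super().__init__()
        # 8 input channels: original RGB (3) + its depth (1)
        # + guided RGB (3) + its depth (1), concatenated along channels.
        self.encoder = nn.Sequential(
            nn.Conv2d(8, base, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            SpatialAttention(base),                # attention module (illustrative)
            nn.ConvTranspose2d(base, base, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, 3, 3, padding=1),      # refinement head, standing in for the enhanced module
        )

    def forward(self, img, depth, guide_img, guide_depth):
        x = torch.cat([img, depth, guide_img, guide_depth], dim=1)
        return self.decoder(self.encoder(x))

# Usage: relight a 256x256 image to match the guided illumination setting.
model = S3NetSketch()
img, depth = torch.rand(1, 3, 256, 256), torch.rand(1, 1, 256, 256)
guide_img, guide_depth = torch.rand(1, 3, 256, 256), torch.rand(1, 1, 256, 256)
relit = model(img, depth, guide_img, guide_depth)  # -> (1, 3, 256, 256)

The key design choice this sketch captures is that there is no separate branch per input: original image, guided image, and both depth maps are fused at the very first layer and processed by a single encoder-decoder stream.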

Cite

Text

Yang et al. "S3Net: A Single Stream Structure for Depth Guided Image Relighting." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2021. doi:10.1109/CVPRW53098.2021.00037

Markdown

[Yang et al. "S3Net: A Single Stream Structure for Depth Guided Image Relighting." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2021.](https://mlanthology.org/cvprw/2021/yang2021cvprw-s3net/) doi:10.1109/CVPRW53098.2021.00037

BibTeX

@inproceedings{yang2021cvprw-s3net,
  title     = {{S3Net: A Single Stream Structure for Depth Guided Image Relighting}},
  author    = {Yang, Hao-Hsiang and Chen, Wei-Ting and Kuo, Sy-Yen},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2021},
  pages     = {276--283},
  doi       = {10.1109/CVPRW53098.2021.00037},
  url       = {https://mlanthology.org/cvprw/2021/yang2021cvprw-s3net/}
}