SA-AE for Any-to-Any Relighting
Abstract
In this paper, we present Self-Attention AutoEncoder (SA-AE), a novel automatic model for generating a relit image from a source image to match the illumination setting of a guide image, a task known as any-to-any relighting. To reduce the difficulty of learning, we adopt an implicit scene representation learned by the encoder and render the relit image with the decoder. Based on the learned scene representation, a lighting estimation network is formulated as a classification task to predict the illumination settings from the guide images. A lighting-to-feature network is further designed to recover the corresponding implicit scene representation from the illumination settings, which is the inverse process of the lighting estimation network. In addition, a self-attention mechanism is introduced in the autoencoder to focus the re-rendering on the relighting-related regions in the source images. Extensive experiments on the VIDIT dataset show that the proposed approach achieved 1st place in terms of both MPS and SSIM in the AIM 2020 Any-to-any Relighting Challenge.
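The abstract describes a four-stage pipeline: encode the source and guide images into an implicit scene representation, classify the guide's illumination setting, map that discrete setting back to a lighting feature, and decode after a self-attention pass. The following toy NumPy sketch mirrors that data flow only; every shape, weight matrix, and module here is a hypothetical linear stand-in for the paper's learned networks, not the actual SA-AE implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(feat, Wq, Wk, Wv):
    """Single-head self-attention with a residual connection.

    feat: (N, d) flattened spatial features; the attention map re-weights
    which regions contribute to the re-rendering, loosely mimicking the
    relighting-related attention described in the abstract.
    """
    q, k, v = feat @ Wq, feat @ Wk, feat @ Wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[1]))
    return feat + attn @ v

d, n_lights = 8, 5  # hypothetical feature width and number of illumination classes
W_enc = rng.normal(size=(3, d)) * 0.1   # "encoder": pixel -> implicit scene feature
W_dec = rng.normal(size=(d, 3)) * 0.1   # "decoder": feature -> rendered pixel
W_cls = rng.normal(size=(d, n_lights))  # "lighting estimation": feature -> class logits
E_l2f = rng.normal(size=(n_lights, d))  # "lighting-to-feature": class -> lighting feature
Wq, Wk, Wv = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))

source = rng.random((16, 3))  # 16 "pixels" of the source image
guide = rng.random((16, 3))   # guide image carrying the target illumination

# 1) Encode both images into the implicit scene representation.
f_src, f_gui = source @ W_enc, guide @ W_enc

# 2) Estimate the guide's illumination setting as a classification.
light = int(softmax(f_gui.mean(axis=0) @ W_cls).argmax())

# 3) Inverse step: recover a lighting feature from the discrete setting.
f_light = E_l2f[light]

# 4) Inject the target lighting, apply self-attention, and decode.
relit = self_attention(f_src + f_light, Wq, Wk, Wv) @ W_dec
print(relit.shape)  # (16, 3): one relit RGB value per "pixel"
```

Note that step 3 is the inverse of step 2 only in the loose sense used by the abstract: the classifier maps features to a discrete setting, and the embedding table maps the setting back into feature space so it can be fused with the source representation.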
Cite
Text
Hu et al. "SA-AE for Any-to-Any Relighting." European Conference on Computer Vision Workshops, 2020. doi:10.1007/978-3-030-67070-2_32
Markdown
[Hu et al. "SA-AE for Any-to-Any Relighting." European Conference on Computer Vision Workshops, 2020.](https://mlanthology.org/eccvw/2020/hu2020eccvw-saae/) doi:10.1007/978-3-030-67070-2_32
BibTeX
@inproceedings{hu2020eccvw-saae,
title = {{SA-AE for Any-to-Any Relighting}},
author = {Hu, Zhongyun and Huang, Xin and Li, Yaning and Wang, Qing},
booktitle = {European Conference on Computer Vision Workshops},
year = {2020},
  pages = {535--549},
doi = {10.1007/978-3-030-67070-2_32},
url = {https://mlanthology.org/eccvw/2020/hu2020eccvw-saae/}
}