Deep Portrait Delighting
Abstract
We present a deep neural network for removing undesirable shading features from an unconstrained portrait image, recovering the underlying texture. Our training scheme incorporates three regularization strategies: masked loss, to emphasize high-frequency shading features; soft-shadow loss, which improves sensitivity to subtle changes in lighting; and shading-offset estimation, to supervise separation of shading and texture. Our method demonstrates improved delighting quality and generalization when compared with the state-of-the-art. We further demonstrate how our delighting method can enhance the performance of light-sensitive computer vision tasks such as face relighting and semantic parsing, allowing them to handle extreme lighting conditions.
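The abstract names a masked loss that emphasizes high-frequency shading features. The paper's exact formulation is not given here, so the sketch below is only a plausible illustration of the general idea: an L1 reconstruction loss whose per-pixel error is reweighted by a mask (e.g. one highlighting shadow boundaries). The function name and normalization are assumptions, not the authors' implementation.

```python
import numpy as np

def masked_l1_loss(pred: np.ndarray, target: np.ndarray, mask: np.ndarray) -> float:
    """Illustrative masked L1 loss (not the paper's exact formulation).

    pred, target: predicted and ground-truth texture images, same shape.
    mask: per-pixel weights emphasizing high-frequency shading regions,
          e.g. large near shadow boundaries, small in flat areas.
    """
    # Weight the absolute reconstruction error by the mask, then
    # normalize by the total mask weight so the scale of the loss
    # does not depend on how many pixels the mask covers.
    weighted_error = np.abs(pred - target) * mask
    return float(weighted_error.sum() / (mask.sum() + 1e-8))

# Toy usage: uniform error of 1.0 under a binary mask yields a loss of 1.0.
pred = np.ones((2, 2))
target = np.zeros((2, 2))
mask = np.array([[1.0, 0.0], [0.0, 1.0]])
loss = masked_l1_loss(pred, target, mask)
```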
Cite
Text
Weir et al. "Deep Portrait Delighting." Proceedings of the European Conference on Computer Vision (ECCV), 2022. doi:10.1007/978-3-031-19787-1_24
Markdown
[Weir et al. "Deep Portrait Delighting." Proceedings of the European Conference on Computer Vision (ECCV), 2022.](https://mlanthology.org/eccv/2022/weir2022eccv-deep/) doi:10.1007/978-3-031-19787-1_24
BibTeX
@inproceedings{weir2022eccv-deep,
title = {{Deep Portrait Delighting}},
author = {Weir, Joshua and Zhao, Junhong and Chalmers, Andrew and Rhee, Taehyun},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2022},
doi = {10.1007/978-3-031-19787-1_24},
url = {https://mlanthology.org/eccv/2022/weir2022eccv-deep/}
}