Model-Based Occlusion Disentanglement for Image-to-Image Translation
Abstract
Image-to-image translation is affected by entanglement phenomena, which may occur when the target data contain occlusions such as raindrops, dirt, etc. Our unsupervised, model-based learning disentangles scene and occlusions, while benefiting from an adversarial pipeline that regresses the physical parameters of the occlusion model. The experiments demonstrate that our method is able to handle varying types of occlusions and generate highly realistic translations, qualitatively and quantitatively outperforming the state of the art on multiple datasets.
Cite
Text
Pizzati et al. "Model-Based Occlusion Disentanglement for Image-to-Image Translation." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58565-5_27

Markdown

[Pizzati et al. "Model-Based Occlusion Disentanglement for Image-to-Image Translation." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/pizzati2020eccv-modelbased/) doi:10.1007/978-3-030-58565-5_27

BibTeX
@inproceedings{pizzati2020eccv-modelbased,
  title = {{Model-Based Occlusion Disentanglement for Image-to-Image Translation}},
  author = {Pizzati, Fabio and Cerri, Pietro and de Charette, Raoul},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year = {2020},
  doi = {10.1007/978-3-030-58565-5_27},
  url = {https://mlanthology.org/eccv/2020/pizzati2020eccv-modelbased/}
}