High-Fidelity Image Inpainting with GAN Inversion
Abstract
Image inpainting seeks to recover corrupted image regions in a semantically consistent way from the unmasked content. Previous approaches usually reuse a well-trained GAN as an effective prior, generating realistic patches for missing holes via GAN inversion. Nevertheless, ignoring the hard constraint in these algorithms may yield a gap between GAN inversion and image inpainting. To address this problem, in this paper we devise a novel GAN inversion model for image inpainting, dubbed InvertFill, mainly consisting of an encoder with a pre-modulation module and a GAN generator with an F&W+ latent space. Within the encoder, the pre-modulation network leverages multi-scale structures to encode more discriminative semantics into style vectors. To bridge the gap between GAN inversion and image inpainting, the F&W+ latent space is proposed to eliminate glaring color discrepancy and semantic inconsistency. To reconstruct faithful and photorealistic images, a simple yet effective Soft-update Mean Latent module is designed to capture more diverse in-domain patterns, synthesizing high-fidelity textures for large corruptions. Comprehensive experiments on four challenging datasets, including Places2, CelebA-HQ, MetFaces, and Scenery, demonstrate that our InvertFill outperforms advanced approaches qualitatively and quantitatively and supports completion of out-of-domain images well. All code, models, and results will be made available upon acceptance.
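The "soft update" of a mean latent can be read as an exponential moving average over encoded style vectors. The sketch below is an assumption-laden illustration of that general idea, not the paper's implementation: the function name, the momentum value, and the 512-dimensional latent size are all hypothetical choices, and the paper's actual module may differ.

```python
import numpy as np


def soft_update_mean_latent(mean_latent, batch_latents, momentum=0.999):
    """Hypothetical soft (EMA) update of a running mean latent code.

    mean_latent:   running estimate, shape (latent_dim,)
    batch_latents: latent codes from the current batch, shape (batch, latent_dim)
    momentum:      how much of the old estimate to keep (assumed value)
    """
    batch_mean = batch_latents.mean(axis=0)
    return momentum * mean_latent + (1.0 - momentum) * batch_mean


# Toy usage with random 512-d style vectors standing in for encoder outputs.
rng = np.random.default_rng(0)
mean_w = np.zeros(512)
for _ in range(10):
    batch = rng.normal(size=(8, 512))  # placeholder for encoded latents
    mean_w = soft_update_mean_latent(mean_w, batch)
```

With a momentum close to 1, the running mean drifts slowly, so it can accumulate diverse in-domain patterns rather than tracking any single batch.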
Cite
Text
Yu et al. "High-Fidelity Image Inpainting with GAN Inversion." Proceedings of the European Conference on Computer Vision (ECCV), 2022. doi:10.1007/978-3-031-19787-1_14
Markdown
[Yu et al. "High-Fidelity Image Inpainting with GAN Inversion." Proceedings of the European Conference on Computer Vision (ECCV), 2022.](https://mlanthology.org/eccv/2022/yu2022eccv-highfidelity/) doi:10.1007/978-3-031-19787-1_14
BibTeX
@inproceedings{yu2022eccv-highfidelity,
title = {{High-Fidelity Image Inpainting with GAN Inversion}},
author = {Yu, Yongsheng and Zhang, Libo and Fan, Heng and Luo, Tiejian},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2022},
doi = {10.1007/978-3-031-19787-1_14},
url = {https://mlanthology.org/eccv/2022/yu2022eccv-highfidelity/}
}