Improved StyleGAN-V2 Based Inversion for Out-of-Distribution Images
Abstract
Inverting an image onto the latent space of pre-trained generators, e.g., StyleGAN-v2, has emerged as a popular strategy to leverage strong image priors for ill-posed restoration. Several studies have shown that this approach is effective at inverting images similar to the data used to train the generator. However, with out-of-distribution (OOD) data that the generator has not been exposed to, existing inversion techniques produce sub-optimal results. In this paper, we propose SPHInX (StyleGAN with Projection Heads for Inverting X), an approach for accurately embedding OOD images onto the StyleGAN latent space. SPHInX optimizes a style projection head using a novel training strategy that imposes a vicinal regularization in the StyleGAN latent space. To further enhance OOD inversion, SPHInX can additionally optimize a content projection head and noise variables in every layer. Our empirical studies on a suite of OOD data show that, in addition to producing higher-quality reconstructions than state-of-the-art inversion techniques, SPHInX is effective for ill-posed restoration tasks while offering semantic editing capabilities.
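As a rough illustration of the projection-head idea described above (not the authors' SPHInX procedure, which additionally uses vicinal regularization, a content head, and per-layer noise optimization), the sketch below optimizes a small MLP head that maps a learnable code into the W+ space of a frozen generator, minimizing a pixel reconstruction loss on the target image. The toy generator, the head architecture, the dimensions, and the loss are all placeholder assumptions for illustration.

# Minimal sketch of projection-head-based inversion (illustrative only; not the
# official SPHInX implementation). The generator below is a toy stand-in for a
# frozen, pre-trained StyleGAN-v2 synthesis network mapping W+ codes to images.
import torch
import torch.nn as nn

NUM_LAYERS, W_DIM, IMG_RES = 14, 512, 256  # assumed StyleGAN-v2-like dimensions

class ToyGenerator(nn.Module):
    """Placeholder for a frozen StyleGAN-v2 synthesis network."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(NUM_LAYERS * W_DIM, 3 * IMG_RES * IMG_RES)

    def forward(self, w_plus):                       # w_plus: (B, NUM_LAYERS, W_DIM)
        x = self.fc(w_plus.flatten(1))
        return x.view(-1, 3, IMG_RES, IMG_RES)

class StyleProjectionHead(nn.Module):
    """Small MLP that projects a learnable code into the W+ latent space."""
    def __init__(self, z_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 1024), nn.LeakyReLU(0.2),
            nn.Linear(1024, NUM_LAYERS * W_DIM),
        )

    def forward(self, z):
        return self.net(z).view(-1, NUM_LAYERS, W_DIM)

def invert(target, generator, steps=500, lr=1e-2):
    """Optimize the projection head (and its input code) to reconstruct `target`."""
    generator.eval()
    for p in generator.parameters():                 # keep the generator frozen
        p.requires_grad_(False)
    head = StyleProjectionHead()
    z = torch.randn(1, 512, requires_grad=True)
    opt = torch.optim.Adam(list(head.parameters()) + [z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        recon = generator(head(z))
        # The paper adds further regularization terms; plain MSE is used here
        # purely to keep the sketch self-contained.
        loss = torch.nn.functional.mse_loss(recon, target)
        loss.backward()
        opt.step()
    return head, z

if __name__ == "__main__":
    target = torch.rand(1, 3, IMG_RES, IMG_RES)      # stand-in for an OOD image
    head, z = invert(target, ToyGenerator(), steps=10)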
Cite
Text
Subramanyam et al. "Improved StyleGAN-V2 Based Inversion for Out-of-Distribution Images." International Conference on Machine Learning, 2022.
Markdown
[Subramanyam et al. "Improved StyleGAN-V2 Based Inversion for Out-of-Distribution Images." International Conference on Machine Learning, 2022.](https://mlanthology.org/icml/2022/subramanyam2022icml-improved/)
BibTeX
@inproceedings{subramanyam2022icml-improved,
  title = {{Improved StyleGAN-V2 Based Inversion for Out-of-Distribution Images}},
  author = {Subramanyam, Rakshith and Narayanaswamy, Vivek and Naufel, Mark and Spanias, Andreas and Thiagarajan, Jayaraman J.},
  booktitle = {International Conference on Machine Learning},
  year = {2022},
  pages = {20625--20639},
  volume = {162},
  url = {https://mlanthology.org/icml/2022/subramanyam2022icml-improved/}
}