Intermediate Layer Optimization for Inverse Problems Using Deep Generative Models
Abstract
We propose Intermediate Layer Optimization, a novel optimization algorithm for solving inverse problems with deep generative models. Instead of optimizing only over the initial latent code, we progressively change the input layer we optimize over, obtaining successively more expressive generators. We also experiment with different loss functions and utilize a perceptual loss combined with standard mean squared error. We empirically show that our approach outperforms the state-of-the-art inversion methods introduced in StyleGAN-2 and PULSE.
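The abstract describes optimizing first over the latent code and then over successively deeper intermediate activations, with a loss that combines mean squared error and a perceptual term. Below is a minimal sketch of that idea under toy assumptions that are not from the paper: a small untrained MLP generator instead of StyleGAN-2, a random inpainting mask as the inverse problem, and a fixed random linear map standing in for the perceptual feature extractor. All names and hyperparameters are illustrative.

```python
# Toy sketch of Intermediate Layer Optimization (ILO).
# Assumptions (not from the paper): small MLP generator, inpainting mask,
# and a fixed random linear map as a stand-in "perceptual" feature extractor.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator as a stack of layers whose intermediate activations we can optimize.
layers = nn.ModuleList([
    nn.Sequential(nn.Linear(32, 64), nn.ReLU()),
    nn.Sequential(nn.Linear(64, 128), nn.ReLU()),
    nn.Sequential(nn.Linear(128, 256), nn.Tanh()),  # output "image" of 256 pixels
])

def forward_from(x, start):
    """Run the generator from layer `start`, given activations x at that layer."""
    for layer in layers[start:]:
        x = layer(x)
    return x

# Inverse problem: observe a randomly masked target (inpainting).
target = torch.tanh(torch.randn(1, 256))
mask = (torch.rand(1, 256) > 0.5).float()
y = mask * target

# Fixed random feature map standing in for a learned perceptual loss (assumption).
feat = nn.Linear(256, 64)
for p in feat.parameters():
    p.requires_grad_(False)

def loss_fn(x_hat):
    mse = ((mask * x_hat - y) ** 2).mean()
    perceptual = ((feat(mask * x_hat) - feat(y)) ** 2).mean()
    return mse + 0.1 * perceptual

# ILO loop: start from the latent code, then progressively move the optimization
# variable to deeper layers, warm-starting each stage from the previous solution.
z = torch.randn(1, 32)
current = z
for start_layer in range(len(layers)):
    current = current.detach().requires_grad_(True)
    opt = torch.optim.Adam([current], lr=1e-2)
    for _ in range(200):
        opt.zero_grad()
        loss = loss_fn(forward_from(current, start_layer))
        loss.backward()
        opt.step()
    print(f"layer {start_layer}: loss {loss.item():.4f}")
    # Push the optimized activations through the current layer to obtain the
    # starting point for the next (more expressive) stage.
    if start_layer + 1 < len(layers):
        current = layers[start_layer](current.detach())
```

Because the toy generator is untrained, the numbers are meaningless; the sketch only illustrates the control flow of optimizing progressively deeper intermediate representations with a combined MSE and perceptual objective.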
Cite
Text
Dean et al. "Intermediate Layer Optimization for Inverse Problems Using Deep Generative Models." NeurIPS 2020 Workshops: Deep_Inverse, 2020.
Markdown
[Dean et al. "Intermediate Layer Optimization for Inverse Problems Using Deep Generative Models." NeurIPS 2020 Workshops: Deep_Inverse, 2020.](https://mlanthology.org/neuripsw/2020/dean2020neuripsw-intermediate/)
BibTeX
@inproceedings{dean2020neuripsw-intermediate,
  title     = {{Intermediate Layer Optimization for Inverse Problems Using Deep Generative Models}},
  author    = {Dean, Joseph and Daras, Giannis and Dimakis, Alex},
  booktitle = {NeurIPS 2020 Workshops: Deep_Inverse},
  year      = {2020},
  url       = {https://mlanthology.org/neuripsw/2020/dean2020neuripsw-intermediate/}
}