Neural Photo Editing with Introspective Adversarial Networks
Abstract
The increasingly photorealistic sample quality of generative image models suggests their feasibility in applications beyond image generation. We present the Neural Photo Editor, an interface that leverages the power of generative neural networks to make large, semantically coherent changes to existing images. To tackle the challenge of achieving accurate reconstructions without loss of feature quality, we introduce the Introspective Adversarial Network, a novel hybridization of the VAE and GAN. Our model efficiently captures long-range dependencies through use of a computational block based on weight-shared dilated convolutions, and improves generalization performance with Orthogonal Regularization, a novel weight regularization method. We validate our contributions on CelebA, SVHN, and CIFAR-100, and produce samples and reconstructions with high visual fidelity.
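The Orthogonal Regularization mentioned in the abstract penalizes weight matrices whose rows drift away from orthonormality during training. As a rough illustration only, below is a minimal NumPy sketch of one such penalty of the form sum(|W W^T − I|); the function name `orthogonal_penalty` and the `strength` coefficient are placeholders introduced here, and the exact reduction and weighting used in the paper may differ.

```python
import numpy as np

def orthogonal_penalty(W, strength=1e-4):
    """Sketch of an orthogonality penalty on a 2D weight matrix.

    W: array of shape (out_features, in_features); convolution kernels
       would first be reshaped to 2D. Returns strength * sum(|W W^T - I|),
    which is zero when the rows of W are orthonormal.
    """
    gram = W @ W.T
    identity = np.eye(W.shape[0])
    return strength * np.abs(gram - identity).sum()

# Example: the penalty is near zero for an orthogonal matrix
# and substantially larger for an unconstrained random one.
Q, _ = np.linalg.qr(np.random.randn(64, 64))
print(orthogonal_penalty(Q))                        # ~0
print(orthogonal_penalty(np.random.randn(64, 64)))  # much larger
```

In practice such a term would be added to the training loss alongside the model's reconstruction and adversarial objectives, with `strength` treated as a hyperparameter.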
Cite
Text
Brock et al. "Neural Photo Editing with Introspective Adversarial Networks." International Conference on Learning Representations, 2017.
Markdown
[Brock et al. "Neural Photo Editing with Introspective Adversarial Networks." International Conference on Learning Representations, 2017.](https://mlanthology.org/iclr/2017/brock2017iclr-neural/)
BibTeX
@inproceedings{brock2017iclr-neural,
  title     = {{Neural Photo Editing with Introspective Adversarial Networks}},
  author    = {Brock, Andrew and Lim, Theodore and Ritchie, James M. and Weston, Nick},
  booktitle = {International Conference on Learning Representations},
  year      = {2017},
  url       = {https://mlanthology.org/iclr/2017/brock2017iclr-neural/}
}