Semi-Supervised StyleGAN for Disentanglement Learning
Abstract
Disentanglement learning is crucial for obtaining disentangled representations and controllable generation. Current disentanglement methods face several inherent limitations: difficulty with high-resolution images, a primary focus on learning disentangled representations rather than controllable generation, and non-identifiability due to the unsupervised setting. To alleviate these limitations, we design new architectures and loss functions based on StyleGAN (Karras et al., 2019) for semi-supervised high-resolution disentanglement learning. We create two complex high-resolution synthetic datasets for systematic testing. We investigate the impact of limited supervision and find that using only 0.25%–2.5% of labeled data is sufficient for good disentanglement on both synthetic and real datasets. We propose new metrics to quantify generator controllability, and observe that there may exist a crucial trade-off between disentangled representation learning and controllable generation. We also consider semantic fine-grained image editing to achieve better generalization to unseen images.
Cite

Text

Nie et al. "Semi-Supervised StyleGAN for Disentanglement Learning." International Conference on Machine Learning, 2020.

Markdown

[Nie et al. "Semi-Supervised StyleGAN for Disentanglement Learning." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/nie2020icml-semisupervised/)

BibTeX
@inproceedings{nie2020icml-semisupervised,
title = {{Semi-Supervised StyleGAN for Disentanglement Learning}},
author = {Nie, Weili and Karras, Tero and Garg, Animesh and Debnath, Shoubhik and Patney, Anjul and Patel, Ankit and Anandkumar, Animashree},
booktitle = {International Conference on Machine Learning},
year = {2020},
pages = {7360--7369},
volume = {119},
url = {https://mlanthology.org/icml/2020/nie2020icml-semisupervised/}
}