Attribute2Image: Conditional Image Generation from Visual Attributes
Abstract
This paper investigates the novel problem of generating images from visual attributes. We model the image as a composite of foreground and background and develop a layered generative model with disentangled latent variables that can be learned end-to-end using a variational auto-encoder. We experiment with natural images of faces and birds and demonstrate that the proposed models are capable of generating realistic and diverse samples with disentangled latent representations. We use a general energy minimization algorithm for posterior inference of latent variables given novel images. With this inference procedure, the learned generative models achieve excellent quantitative and visual results in the tasks of attribute-conditioned image reconstruction and completion.
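To make the variational auto-encoder objective underlying the abstract concrete, the sketch below computes the (negative) evidence lower bound for an attribute-conditioned VAE: a reconstruction term for the decoder output p(x | z, y) plus a KL regularizer pulling the approximate posterior q(z | x, y) toward a standard-normal prior. This is a minimal NumPy illustration, not the paper's implementation; the toy arrays, function names, and the choice of a Bernoulli reconstruction likelihood are assumptions for demonstration, and the conditioning attribute y is presumed to enter through the encoder and decoder networks, which are omitted here.

```python
import numpy as np

def kl_standard_normal(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def cvae_loss(x, x_recon, mu, logvar):
    # Negative ELBO: Bernoulli reconstruction cross-entropy plus the KL
    # term toward the standard-normal prior on the latent variables.
    eps = 1e-8  # numerical floor to keep the logs finite
    recon = -np.sum(x * np.log(x_recon + eps)
                    + (1.0 - x) * np.log(1.0 - x_recon + eps))
    return recon + kl_standard_normal(mu, logvar)

# Toy example (stand-ins for a real image and decoder output):
rng = np.random.default_rng(0)
x = (rng.random(64) > 0.5).astype(float)       # binary "image"
x_recon = np.clip(x * 0.9 + 0.05, 0.0, 1.0)    # pretend p(x | z, y)
mu = 0.1 * rng.normal(size=8)                  # encoder mean
logvar = np.zeros(8)                           # encoder log-variance
print(cvae_loss(x, x_recon, mu, logvar))
```

In this setup, minimizing the loss over the encoder and decoder parameters trains both jointly; the layered foreground/background structure described in the abstract would live inside the decoder.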
Cite
Text
Yan et al. "Attribute2Image: Conditional Image Generation from Visual Attributes." European Conference on Computer Vision, 2016. doi:10.1007/978-3-319-46493-0_47
Markdown
[Yan et al. "Attribute2Image: Conditional Image Generation from Visual Attributes." European Conference on Computer Vision, 2016.](https://mlanthology.org/eccv/2016/yan2016eccv-attribute/) doi:10.1007/978-3-319-46493-0_47
BibTeX
@inproceedings{yan2016eccv-attribute,
title = {{Attribute2Image: Conditional Image Generation from Visual Attributes}},
author = {Yan, Xinchen and Yang, Jimei and Sohn, Kihyuk and Lee, Honglak},
booktitle = {European Conference on Computer Vision},
year = {2016},
pages = {776--791},
doi = {10.1007/978-3-319-46493-0_47},
url = {https://mlanthology.org/eccv/2016/yan2016eccv-attribute/}
}