Content-Consistent Generation of Realistic Eyes with Style
Abstract
Accurately labeled real-world training data can be scarce, so recent works adapt, modify, or generate images to augment target datasets. However, retaining relevant details from the input data in the generated images is challenging, and failure to do so can be critical to performance on the final task. In this work, we synthesize person-specific eye images that satisfy a given semantic segmentation mask (content) while following the style of a specified person from only a few reference images. We introduce two approaches: (a) one used to win the OpenEDS Synthetic Eye Generation Challenge at ICCV 2019, and (b) a principled approach that injects style and content information simultaneously at multiple scales. Our implementation is available at https://github.com/mcbuehler/Seg2Eye.
Cite
Text
Bühler et al. "Content-Consistent Generation of Realistic Eyes with Style." IEEE/CVF International Conference on Computer Vision Workshops, 2019. doi:10.1109/ICCVW48693.2019.9130178
Markdown
[Bühler et al. "Content-Consistent Generation of Realistic Eyes with Style." IEEE/CVF International Conference on Computer Vision Workshops, 2019.](https://mlanthology.org/iccvw/2019/buhler2019iccvw-contentconsistent/) doi:10.1109/ICCVW48693.2019.9130178
BibTeX
@inproceedings{buhler2019iccvw-contentconsistent,
title = {{Content-Consistent Generation of Realistic Eyes with Style}},
author = {Bühler, Marcel C. and Park, Seonwook and De Mello, Shalini and Zhang, Xucong and Hilliges, Otmar},
booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
year = {2019},
  pages = {1--5},
doi = {10.1109/ICCVW48693.2019.9130178},
url = {https://mlanthology.org/iccvw/2019/buhler2019iccvw-contentconsistent/}
}