Graph2Pix: A Graph-Based Image to Image Translation Framework
Abstract
In this paper, we propose a graph-based image-to-image translation framework for generating images. We use rich data collected from the popular creativity platform Artbreeder, where users interpolate multiple GAN-generated images to create artworks. This unique approach of creating new images leads to a tree-like structure where one can track historical data about the creation of a particular image. Inspired by this structure, we propose a novel graph-to-image translation model called Graph2Pix, which takes a graph and corresponding images as input and generates a single image as output. Our experiments show that Graph2Pix is able to outperform several image-to-image translation frameworks on benchmark metrics, including LPIPS (with a 25% improvement) and human perception studies (n = 60), where users preferred the images generated by our method 81.5% of the time. Our source code and dataset are publicly available at https://github.com/catlab-team/graph2pix.
Cite
Text
Gokay et al. "Graph2Pix: A Graph-Based Image to Image Translation Framework." IEEE/CVF International Conference on Computer Vision Workshops, 2021. doi:10.1109/ICCVW54120.2021.00227
Markdown
[Gokay et al. "Graph2Pix: A Graph-Based Image to Image Translation Framework." IEEE/CVF International Conference on Computer Vision Workshops, 2021.](https://mlanthology.org/iccvw/2021/gokay2021iccvw-graph2pix/) doi:10.1109/ICCVW54120.2021.00227
BibTeX
@inproceedings{gokay2021iccvw-graph2pix,
title = {{Graph2Pix: A Graph-Based Image to Image Translation Framework}},
author = {Gokay, Dilara and Simsar, Enis and Atici, Efehan and Ahmetoglu, Alper and Yüksel, Atif Emre and Yanardag, Pinar},
booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
year = {2021},
pages = {2001-2010},
doi = {10.1109/ICCVW54120.2021.00227},
url = {https://mlanthology.org/iccvw/2021/gokay2021iccvw-graph2pix/}
}