C4Synth: Cross-Caption Cycle-Consistent Text-to-Image Synthesis

Abstract

Generating an image from its description is a challenging task worth solving because of its numerous practical applications, ranging from image editing to virtual reality. All existing methods use a single caption to generate a plausible image. A single caption by itself can be limited and may not capture the variety of concepts and behaviors that would be present in the image. We propose two deep generative models that generate an image by making use of multiple captions describing it. This is achieved by ensuring 'Cross-Caption Cycle Consistency' between the multiple captions and the generated image(s). We report quantitative and qualitative results on the standard Caltech-UCSD Birds (CUB) and Oxford-102 Flowers datasets to validate the efficacy of the proposed approach.

Cite

Text

Joseph et al. "C4Synth: Cross-Caption Cycle-Consistent Text-to-Image Synthesis." IEEE/CVF Winter Conference on Applications of Computer Vision, 2019. doi:10.1109/WACV.2019.00044

Markdown

[Joseph et al. "C4Synth: Cross-Caption Cycle-Consistent Text-to-Image Synthesis." IEEE/CVF Winter Conference on Applications of Computer Vision, 2019.](https://mlanthology.org/wacv/2019/joseph2019wacv-c/) doi:10.1109/WACV.2019.00044

BibTeX

@inproceedings{joseph2019wacv-c,
  title     = {{C4Synth: Cross-Caption Cycle-Consistent Text-to-Image Synthesis}},
  author    = {Joseph, K. J. and Pal, Arghya and Rajanala, Sailaja and Balasubramanian, Vineeth N.},
  booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision},
  year      = {2019},
  pages     = {358--366},
  doi       = {10.1109/WACV.2019.00044},
  url       = {https://mlanthology.org/wacv/2019/joseph2019wacv-c/}
}