Recycle-GAN: Unsupervised Video Retargeting

Abstract

We introduce a data-driven approach for unsupervised video retargeting that translates content from one domain to another while preserving the style native to the target domain, e.g., if the content of John Oliver's speech were transferred to Stephen Colbert, the generated speech should be in Stephen Colbert's style. Our approach combines spatial and temporal information with adversarial losses for content translation and style preservation. In this work, we first study the advantages of spatiotemporal constraints over purely spatial constraints for effective retargeting. We then demonstrate the proposed approach on problems where information in both space and time matters, such as face-to-face translation, flower-to-flower translation, wind and cloud synthesis, and sunrise and sunset.
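
The spatiotemporal constraint the abstract refers to is the paper's "recycle" loss: frames are translated to the other domain, a temporal predictor extrapolates the next frame there, and the result is mapped back and compared against the true future frame. The sketch below is a minimal, hypothetical PyTorch rendering of that idea, not the authors' code: the generators `G_Y`, `G_X` and predictor `P_Y` are stand-in modules, the predictor is assumed to consume the two previous frames, and an L1 penalty is used where the paper writes a squared norm.

```python
import torch
import torch.nn as nn


def recycle_loss(x_frames, G_Y, G_X, P_Y):
    """Recycle loss for a clip x_1..x_T from domain X (a sketch).

    G_Y: maps domain X -> Y;  G_X: maps domain Y -> X;
    P_Y: predicts the next Y frame from the two previous Y frames.
    """
    loss = x_frames[0].new_zeros(())
    for t in range(2, len(x_frames)):
        # Translate the two past frames into domain Y.
        y_prev = G_Y(x_frames[t - 2]), G_Y(x_frames[t - 1])
        # Predict the next frame in Y, then map it back to X.
        y_next = P_Y(torch.cat(y_prev, dim=1))
        x_back = G_X(y_next)
        # The round trip should reconstruct the true future frame x_t.
        loss = loss + torch.mean(torch.abs(x_back - x_frames[t]))
    return loss


if __name__ == "__main__":
    # Toy stand-ins for the learned networks, just to exercise the loss.
    G_Y = nn.Conv2d(3, 3, 3, padding=1)
    G_X = nn.Conv2d(3, 3, 3, padding=1)
    P_Y = nn.Conv2d(6, 3, 3, padding=1)  # takes two stacked frames
    clip = [torch.randn(1, 3, 64, 64) for _ in range(4)]
    print(recycle_loss(clip, G_Y, G_X, P_Y).item())
```

In the full method this term is combined with per-frame adversarial losses in each domain, which is what lets the model preserve the target speaker's style while the recycle term enforces temporally coherent content.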

Cite

Text

Bansal et al. "Recycle-GAN: Unsupervised Video Retargeting." Proceedings of the European Conference on Computer Vision (ECCV), 2018. doi:10.1007/978-3-030-01228-1_8

Markdown

[Bansal et al. "Recycle-GAN: Unsupervised Video Retargeting." Proceedings of the European Conference on Computer Vision (ECCV), 2018.](https://mlanthology.org/eccv/2018/bansal2018eccv-recyclegan/) doi:10.1007/978-3-030-01228-1_8

BibTeX

@inproceedings{bansal2018eccv-recyclegan,
  title     = {{Recycle-GAN: Unsupervised Video Retargeting}},
  author    = {Bansal, Aayush and Ma, Shugao and Ramanan, Deva and Sheikh, Yaser},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2018},
  doi       = {10.1007/978-3-030-01228-1_8},
  url       = {https://mlanthology.org/eccv/2018/bansal2018eccv-recyclegan/}
}