Iterative Feature Transformation for Fast and Versatile Universal Style Transfer

Abstract

The general framework for fast universal style transfer consists of an autoencoder with a feature transformation at the bottleneck. We propose a new transformation that iteratively stylizes features via analytical gradient descent, and experiments show that one of its advantages is speed. With control knobs that balance content preservation against the strength of the style effect, we also show this method can switch between artistic and photo-realistic style transfer and reduce distortion and artifacts. Finally, we show it can be used for applications requiring spatial control and multiple-style transfer.
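The core idea of iteratively stylizing bottleneck features with analytical gradient descent can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the objective (Gram-matrix style loss plus a content-preservation term weighted by `lam`), the step size `lr`, and the iteration count are all illustrative choices; the paper's exact formulation and control knobs may differ.

```python
import numpy as np

def gram(f):
    # Channel-wise second-moment (Gram) statistics of a C x N feature map.
    return f @ f.T / f.shape[1]

def iterative_style_transform(f_c, f_s, steps=100, lr=0.1, lam=0.05):
    """Iteratively move content features f_c toward the Gram statistics of
    style features f_s by analytical gradient descent on
    ||G(f) - G(f_s)||_F^2 + lam * ||f - f_c||^2  (illustrative objective).
    """
    G_s = gram(f_s)
    f = f_c.copy()
    for _ in range(steps):
        # Analytical gradient of the Frobenius-norm style term.
        grad_style = 4.0 / f.shape[1] * (gram(f) - G_s) @ f
        # Content-preservation knob: pulls features back toward f_c.
        grad_content = 2.0 * lam * (f - f_c)
        f -= lr * (grad_style + grad_content)
    return f

rng = np.random.default_rng(0)
f_c = rng.standard_normal((8, 64))        # toy "content" features, C=8, N=64
f_s = rng.standard_normal((8, 64)) * 2.0  # toy "style" features
f = iterative_style_transform(f_c, f_s)
before = np.linalg.norm(gram(f_c) - gram(f_s))
after = np.linalg.norm(gram(f) - gram(f_s))
```

Raising `lam` favors content preservation (closer to photo-realistic transfer); lowering it favors a stronger style effect, mirroring the content/style trade-off knob described in the abstract.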

Cite

Text

Chiu and Gurari. "Iterative Feature Transformation for Fast and Versatile Universal Style Transfer." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58529-7_11

Markdown

[Chiu and Gurari. "Iterative Feature Transformation for Fast and Versatile Universal Style Transfer." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/chiu2020eccv-iterative/) doi:10.1007/978-3-030-58529-7_11

BibTeX

@inproceedings{chiu2020eccv-iterative,
  title     = {{Iterative Feature Transformation for Fast and Versatile Universal Style Transfer}},
  author    = {Chiu, Tai-Yin and Gurari, Danna},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2020},
  doi       = {10.1007/978-3-030-58529-7_11},
  url       = {https://mlanthology.org/eccv/2020/chiu2020eccv-iterative/}
}