Texture Networks: Feed-Forward Synthesis of Textures and Stylized Images
Abstract
Gatys et al. recently demonstrated that deep networks can generate beautiful textures and stylized images from a single texture example. However, their methods require a slow and memory-consuming optimization process. We propose here an alternative approach that moves the computational burden to a learning stage. Given a single example of a texture, our approach trains compact feed-forward convolutional networks to generate multiple samples of the same texture of arbitrary size and to transfer artistic style from a given image to any other image. The resulting networks are remarkably light-weight and can generate textures of quality comparable to Gatys et al., but hundreds of times faster. More generally, our approach highlights the power and flexibility of generative feed-forward models trained with complex and expressive loss functions.
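The abstract describes the core recipe: instead of optimizing an image directly (as in Gatys et al.), a compact feed-forward generator is trained once against a Gram-matrix texture loss computed from pretrained VGG features, after which new samples require only a single forward pass. The sketch below illustrates that idea; it is not the authors' exact multi-scale architecture, and the names TextureGenerator, VGGFeatures, gram_matrix, and train are illustrative. It assumes PyTorch and a recent torchvision for the pretrained VGG19 weights.

# A minimal sketch (not the authors' exact architecture) of the recipe above:
# train a small feed-forward generator once with a Gram-matrix texture loss
# computed from pretrained VGG19 features, then sample new textures with a
# single forward pass. Assumes PyTorch and a recent torchvision; all names
# below (TextureGenerator, VGGFeatures, gram_matrix, train) are illustrative.
import torch
import torch.nn as nn
import torchvision.models as models

IMAGENET_MEAN = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
IMAGENET_STD = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

def normalize(x):
    # VGG expects ImageNet-normalized inputs.
    return (x - IMAGENET_MEAN.to(x.device)) / IMAGENET_STD.to(x.device)

def gram_matrix(feat):
    # feat: (B, C, H, W) -> (B, C, C) channel correlations, normalized by size.
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

class VGGFeatures(nn.Module):
    # Collects activations from a few VGG19 layers (relu1_1 .. relu4_1 here).
    def __init__(self, layer_ids=(1, 6, 11, 20)):
        super().__init__()
        vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.vgg = vgg
        self.layer_ids = set(layer_ids)

    def forward(self, x):
        feats = []
        for i, layer in enumerate(self.vgg):
            x = layer(x)
            if i in self.layer_ids:
                feats.append(x)
            if i >= max(self.layer_ids):
                break
        return feats

class TextureGenerator(nn.Module):
    # Tiny fully convolutional generator: noise in, RGB texture out.
    def __init__(self, noise_ch=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(noise_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

def train(example_texture, steps=2000, lr=1e-3, device="cpu"):
    # example_texture: (1, 3, H, W) tensor with values in [0, 1].
    vgg = VGGFeatures().to(device)
    gen = TextureGenerator().to(device)
    opt = torch.optim.Adam(gen.parameters(), lr=lr)
    example_texture = example_texture.to(device)
    with torch.no_grad():
        target_grams = [gram_matrix(f) for f in vgg(normalize(example_texture))]
    _, _, h, w = example_texture.shape
    for _ in range(steps):
        z = torch.rand(1, 8, h, w, device=device)       # fresh noise each step
        sample = gen(z)
        loss = sum(((gram_matrix(f) - g) ** 2).sum()
                   for f, g in zip(vgg(normalize(sample)), target_grams))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return gen  # gen(noise) now yields new texture samples in one forward pass

Because the generator in this sketch is fully convolutional, feeding it noise of a different spatial size yields a texture of that size, which is how a feed-forward setting can support samples of arbitrary size; the paper's actual generator is multi-scale, with noise injected at several resolutions.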
Cite
Text
Ulyanov et al. "Texture Networks: Feed-Forward Synthesis of Textures and Stylized Images." International Conference on Machine Learning, 2016.Markdown
[Ulyanov et al. "Texture Networks: Feed-Forward Synthesis of Textures and Stylized Images." International Conference on Machine Learning, 2016.](https://mlanthology.org/icml/2016/ulyanov2016icml-texture/)BibTeX
@inproceedings{ulyanov2016icml-texture,
title = {{Texture Networks: Feed-Forward Synthesis of Textures and Stylized Images}},
author = {Ulyanov, Dmitry and Lebedev, Vadim and Vedaldi, Andrea and Lempitsky, Victor},
booktitle = {International Conference on Machine Learning},
year = {2016},
pages = {1349-1357},
volume = {48},
url = {https://mlanthology.org/icml/2016/ulyanov2016icml-texture/}
}