Texture Attribute Synthesis and Transfer Using Feed-Forward CNNs
Abstract
We present a novel technique for texture synthesis and style transfer based on convolutional neural networks (CNNs). Our method learns feed-forward image generators that correspond to specifications of styles and textures in terms of high-level describable attributes such as 'striped', 'dotted', or 'veined'. This offers two key conceptual advantages over template-based approaches: attributes can be analyzed and activated individually, whereas a template image necessarily specifies many attributes simultaneously; and attributes can combine aspects of many texture templates, allowing flexibility in the generation process. Once the attribute-wise networks are trained, texture synthesis and style transfer are fast, allowing for real-time video processing.
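To make the core idea concrete, the following is a minimal, purely illustrative sketch: one feed-forward generator per describable attribute, applied to a noise field, with outputs blended to combine attributes. The `AttributeGenerator` class, the attribute names, and the random fixed filters are all hypothetical stand-ins; the paper's generators are deep CNNs trained against attribute-specific losses, which this toy numpy code does not attempt to reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, w):
    # Naive single-channel 'same' 2D convolution (illustration only).
    kh, kw = w.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x)
    H, W = x.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * w)
    return out

class AttributeGenerator:
    """Toy stand-in for a feed-forward texture generator g_a.

    Hypothetical: in the paper each g_a is a trained CNN whose output
    exhibits attribute a; here we just stack fixed random filters with
    a tanh nonlinearity to show the feed-forward structure.
    """
    def __init__(self, depth=2, ksize=3, seed=0):
        r = np.random.default_rng(seed)
        self.filters = [r.standard_normal((ksize, ksize)) for _ in range(depth)]

    def __call__(self, z):
        x = z
        for w in self.filters:
            x = np.tanh(conv2d(x, w))  # layer-by-layer feed-forward pass
        return x

# One generator per attribute (attribute names are illustrative).
generators = {a: AttributeGenerator(seed=s)
              for s, a in enumerate(["striped", "dotted"])}

z = rng.standard_normal((32, 32))  # input noise field
# Attributes can be activated individually or blended:
tex = 0.5 * generators["striped"](z) + 0.5 * generators["dotted"](z)
print(tex.shape)  # synthesized texture, same spatial size as the noise
```

Because generation is a single feed-forward pass (no per-image optimization), synthesis cost is fixed once training is done, which is what enables the real-time video processing mentioned above.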
Cite
Text

Irmer et al. "Texture Attribute Synthesis and Transfer Using Feed-Forward CNNs." IEEE/CVF Winter Conference on Applications of Computer Vision, 2017. doi:10.1109/WACV.2017.100

Markdown

[Irmer et al. "Texture Attribute Synthesis and Transfer Using Feed-Forward CNNs." IEEE/CVF Winter Conference on Applications of Computer Vision, 2017.](https://mlanthology.org/wacv/2017/irmer2017wacv-texture/) doi:10.1109/WACV.2017.100

BibTeX
@inproceedings{irmer2017wacv-texture,
title = {{Texture Attribute Synthesis and Transfer Using Feed-Forward CNNs}},
author = {Irmer, Thomas and Glasmachers, Tobias and Maji, Subhransu},
booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision},
year = {2017},
pages = {852--861},
doi = {10.1109/WACV.2017.100},
url = {https://mlanthology.org/wacv/2017/irmer2017wacv-texture/}
}