Learning to Warp for Style Transfer
Abstract
Since its inception in 2015, Style Transfer has focused on texturing a content image using an art exemplar. Recently, the geometric changes that artists make have been acknowledged as an important component of style. Our contribution is to propose a neural network that, uniquely, learns a mapping from a 4D array of inter-feature distances to a non-parametric 2D warp field. The system is generic in not being limited by semantic class: a single learned model suffices, and all examples in this paper are output from one model. Our approach combines the high speed of Liu et al. with the non-parametric warping of Kim et al. Furthermore, our system extends the normal NST paradigm: although it can be used with a single exemplar, we also allow two style exemplars, one for texture and another for geometry. This supports far greater flexibility in use cases than single exemplars can provide.
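To make the abstract's central idea concrete, here is a minimal sketch (not the authors' code) of the pipeline it describes: a 4D volume of pairwise distances between content and style feature maps is fed to a small CNN that regresses a dense, non-parametric 2D warp field, which is then applied to the content image by grid sampling. All layer sizes, function names, and the choice of L2 distance are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def feature_distance_volume(fc, fs):
    """Pairwise squared-L2 distances between content and style features.

    fc, fs: (B, C, H, W) feature maps (e.g. from a frozen VGG encoder).
    Returns the 4D cost volume reshaped to (B, H*W, H, W) so a 2D CNN can
    consume it: entry [b, ij, k, l] is the distance between the content
    feature at flattened position ij and the style feature at (k, l).
    """
    B, C, H, W = fc.shape
    fc_flat = fc.flatten(2)                        # (B, C, H*W)
    fs_flat = fs.flatten(2)                        # (B, C, H*W)
    # |a - b|^2 = |a|^2 + |b|^2 - 2 a.b, computed batch-wise
    d = (fc_flat.pow(2).sum(1).unsqueeze(2)
         + fs_flat.pow(2).sum(1).unsqueeze(1)
         - 2 * fc_flat.transpose(1, 2) @ fs_flat)  # (B, H*W, H*W)
    return d.clamp(min=0).view(B, H * W, H, W)

class WarpRegressor(nn.Module):
    """Illustrative CNN mapping the distance volume to per-pixel offsets."""
    def __init__(self, hw):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(hw, 128, 3, padding=1), nn.ReLU(),
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 2, 3, padding=1),        # (dx, dy) per pixel
        )
    def forward(self, volume):
        return self.net(volume)                    # (B, 2, H, W)

def apply_warp(image, flow):
    """Warp `image` by normalized per-pixel offsets `flow` via bilinear sampling."""
    B, _, H, W = image.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).to(image.device).expand(B, H, W, 2)
    grid = base + flow.permute(0, 2, 3, 1)         # identity grid + offsets
    return F.grid_sample(image, grid, align_corners=True)
```

Under these assumptions, usage would look like `flow = WarpRegressor(H * W)(feature_distance_volume(encoder(content), encoder(style)))`, upsampled to image resolution before `apply_warp`; because the warp is a free-form offset field rather than a fitted parametric transform, it is not tied to any semantic class.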
Cite
Text

Liu et al. "Learning to Warp for Style Transfer." Conference on Computer Vision and Pattern Recognition, 2021. doi:10.1109/CVPR46437.2021.00370

Markdown

[Liu et al. "Learning to Warp for Style Transfer." Conference on Computer Vision and Pattern Recognition, 2021.](https://mlanthology.org/cvpr/2021/liu2021cvpr-learning/) doi:10.1109/CVPR46437.2021.00370

BibTeX
@inproceedings{liu2021cvpr-learning,
title = {{Learning to Warp for Style Transfer}},
author = {Liu, Xiao-Chang and Yang, Yong-Liang and Hall, Peter},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2021},
pages = {3702--3711},
doi = {10.1109/CVPR46437.2021.00370},
url = {https://mlanthology.org/cvpr/2021/liu2021cvpr-learning/}
}