Fast Video Multi-Style Transfer
Abstract
Recent progress in video style transfer has shown promising results with fewer flickering artifacts. However, existing algorithms mainly trade off generality for efficiency, i.e., they construct one network per style example, and often work well only for short video clips. In this work, we propose a single network for fast video multi-style transfer. Specifically, we design a multi-instance normalization block (MIN-Block) to learn different style examples and a ConvLSTM module to encourage temporal consistency. The proposed algorithm is demonstrated to generate temporally consistent video transfer results in different styles while keeping each stylized frame visually pleasing. Extensive experimental results show that the proposed method performs favorably against single-style models and post-processing techniques that alleviate the flickering issue. We achieve as many as 120 stylization effects in a single model and show results on long videos consisting of thousands of frames.
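As a rough illustration of the MIN-Block idea described above, the sketch below keeps one set of instance-normalization affine parameters per style, so a single network can reproduce many styles by selecting the parameters of the requested style. This is a minimal sketch under assumptions; the class name, layer sizes, and PyTorch formulation are illustrative and not the authors' exact implementation.

# Minimal sketch (assumed PyTorch formulation) of a multi-instance normalization block:
# shared parameter-free instance normalization plus one learnable (scale, shift) pair per style.
import torch
import torch.nn as nn

class MultiInstanceNorm2d(nn.Module):
    """Instance normalization with a separate affine transform for each style example."""

    def __init__(self, num_features: int, num_styles: int):
        super().__init__()
        # Shared normalization of each feature map (no learnable affine parameters).
        self.norm = nn.InstanceNorm2d(num_features, affine=False)
        # One learnable scale and shift per style.
        self.weight = nn.Parameter(torch.ones(num_styles, num_features))
        self.bias = nn.Parameter(torch.zeros(num_styles, num_features))

    def forward(self, x: torch.Tensor, style_id: int) -> torch.Tensor:
        # Normalize, then apply the affine parameters of the selected style.
        w = self.weight[style_id].view(1, -1, 1, 1)
        b = self.bias[style_id].view(1, -1, 1, 1)
        return self.norm(x) * w + b

if __name__ == "__main__":
    block = MultiInstanceNorm2d(num_features=64, num_styles=120)
    feats = torch.randn(1, 64, 128, 128)
    out = block(feats, style_id=7)   # stylize with the 8th learned style
    print(out.shape)                 # torch.Size([1, 64, 128, 128])

In this reading, switching styles only swaps a small set of normalization parameters rather than the whole network, which is consistent with packing many stylization effects into a single model; the temporal-consistency ConvLSTM module is a separate component not shown here.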
Cite
Text
Gao et al. "Fast Video Multi-Style Transfer." Winter Conference on Applications of Computer Vision, 2020.
Markdown
[Gao et al. "Fast Video Multi-Style Transfer." Winter Conference on Applications of Computer Vision, 2020.](https://mlanthology.org/wacv/2020/gao2020wacv-fast/)
BibTeX
@inproceedings{gao2020wacv-fast,
  title     = {{Fast Video Multi-Style Transfer}},
  author    = {Gao, Wei and Li, Yijun and Yin, Yihang and Yang, Ming-Hsuan},
  booktitle = {Winter Conference on Applications of Computer Vision},
  year      = {2020},
  url       = {https://mlanthology.org/wacv/2020/gao2020wacv-fast/}
}