Learned Variational Video Color Propagation

Abstract

In this paper, we propose a novel method for color propagation that can recolor gray-scale videos (e.g. historic movies). Our energy-based model combines deep learning with a variational formulation. At its core, the method optimizes over a set of plausible color proposals that are extracted from motion and semantic feature matches, together with a learned regularizer that resolves color ambiguities by enforcing spatial color smoothness. Our approach allows interpreting intermediate results and incorporating extensions, such as using multiple reference frames, even after training. We achieve state-of-the-art results across multiple metrics on standard benchmark datasets, and also provide convincing results on real historical videos, even though videos of this type are not present during training. Moreover, a user evaluation shows that our method propagates initial colors more faithfully and with better temporal consistency.
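The abstract describes an energy that balances a data term over per-pixel color proposals against a smoothness regularizer. As a rough illustration of this kind of variational formulation (not the authors' actual model, which learns its regularizer), the sketch below fuses weighted chroma proposals under a plain total-variation prior via gradient descent; all function and parameter names here are hypothetical.

```python
import numpy as np

def propagate_colors(proposals, weights, lam=0.1, lr=0.2, n_iters=200):
    """Minimize a toy variational energy for color propagation.

    E(u) = sum_k w_k * ||u - c_k||^2   (data term over color proposals)
         + lam * TV(u)                 (smoothness term; the paper learns
                                        this regularizer, here plain TV)

    proposals: (K, H, W, 2) candidate chroma maps (e.g. from matches)
    weights:   (K, H, W)    per-pixel confidence of each proposal
    returns:   (H, W, 2)    fused chroma map
    """
    w = weights[..., None]                          # broadcast over channels
    u = (w * proposals).sum(0) / (w.sum(0) + 1e-8)  # confidence-weighted init

    for _ in range(n_iters):
        # gradient of the quadratic data term
        grad = 2.0 * (w * (u[None] - proposals)).sum(0)
        # smoothed-TV gradient: -div(grad u / |grad u|), finite differences
        dx = np.diff(u, axis=1, append=u[:, -1:])
        dy = np.diff(u, axis=0, append=u[-1:])
        mag = np.sqrt(dx**2 + dy**2 + 1e-6)
        div = (dx / mag - np.roll(dx / mag, 1, axis=1)
               + dy / mag - np.roll(dy / mag, 1, axis=0))
        grad -= lam * div
        u -= lr * grad
    return u
```

In this toy version the fixed TV prior simply favors piecewise-constant colors; the paper's learned regularizer plays the analogous role of resolving ambiguities between competing proposals, but is trained rather than hand-designed.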

Cite

Text

Hofinger et al. "Learned Variational Video Color Propagation." Proceedings of the European Conference on Computer Vision (ECCV), 2022. doi:10.1007/978-3-031-20050-2_30

Markdown

[Hofinger et al. "Learned Variational Video Color Propagation." Proceedings of the European Conference on Computer Vision (ECCV), 2022.](https://mlanthology.org/eccv/2022/hofinger2022eccv-learned/) doi:10.1007/978-3-031-20050-2_30

BibTeX

@inproceedings{hofinger2022eccv-learned,
  title     = {{Learned Variational Video Color Propagation}},
  author    = {Hofinger, Markus and Kobler, Erich and Effland, Alexander and Pock, Thomas},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2022},
  doi       = {10.1007/978-3-031-20050-2_30},
  url       = {https://mlanthology.org/eccv/2022/hofinger2022eccv-learned/}
}