Personalized Cinemagraphs Using Semantic Understanding and Collaborative Learning

Abstract

Cinemagraphs are a compelling way to convey dynamic aspects of a scene. In these media, dynamic and still elements are juxtaposed to create an artistic and narrative experience. Creating a high-quality, aesthetically pleasing cinemagraph requires isolating objects in a semantically meaningful way and then selecting good start times and looping periods for those objects to minimize visual artifacts (such as tearing). To achieve this, we present a new technique that uses object recognition and semantic segmentation as part of an optimization method to automatically create cinemagraphs from videos that are both visually appealing and semantically meaningful. Given a scene with multiple objects, there are many cinemagraphs one could create. Our method evaluates these multiple candidates and presents the best one, as determined by a model trained to predict human preferences in a collaborative way. We demonstrate the effectiveness of our approach with multiple results and a user study.
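
The abstract describes generating multiple candidate cinemagraphs (combinations of animated segments, loop start times, and periods) and ranking them with a learned preference model. The sketch below is only an illustration of that candidate-ranking idea, not the authors' implementation; the `Candidate` fields and the `preference_score` callable are hypothetical stand-ins for the paper's segmentation output and trained model.

```python
# Hypothetical sketch (not the paper's code): pick the candidate cinemagraph
# that a pre-trained preference model scores highest.
from dataclasses import dataclass
from typing import Callable, List, Sequence


@dataclass
class Candidate:
    """One candidate cinemagraph: which segmented objects loop, and how."""
    animated_objects: Sequence[int]  # indices of segments kept dynamic
    start_frame: int                 # loop start time
    period: int                      # loop length in frames


def best_cinemagraph(candidates: List[Candidate],
                     preference_score: Callable[[Candidate], float]) -> Candidate:
    """Return the candidate rated highest by the (assumed) preference model."""
    return max(candidates, key=preference_score)
```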

Cite

Text

Oh et al. "Personalized Cinemagraphs Using Semantic Understanding and Collaborative Learning." International Conference on Computer Vision, 2017. doi:10.1109/ICCV.2017.552

Markdown

[Oh et al. "Personalized Cinemagraphs Using Semantic Understanding and Collaborative Learning." International Conference on Computer Vision, 2017.](https://mlanthology.org/iccv/2017/oh2017iccv-personalized/) doi:10.1109/ICCV.2017.552

BibTeX

@inproceedings{oh2017iccv-personalized,
  title     = {{Personalized Cinemagraphs Using Semantic Understanding and Collaborative Learning}},
  author    = {Oh, Tae-Hyun and Joo, Kyungdon and Joshi, Neel and Wang, Baoyuan and Kweon, In So and Kang, Sing Bing},
  booktitle = {International Conference on Computer Vision},
  year      = {2017},
  doi       = {10.1109/ICCV.2017.552},
  url       = {https://mlanthology.org/iccv/2017/oh2017iccv-personalized/}
}