Joint Face Representation Adaptation and Clustering in Videos

Abstract

Clustering faces in movies or videos is extremely challenging since a character's appearance can vary drastically across different scenes. In addition, varied cinematic styles make it difficult to learn a universal face representation for all videos. Unlike previous methods that assume fixed handcrafted features for face clustering, in this work we formulate a joint face representation adaptation and clustering approach in a deep learning framework. The proposed method allows the face representation to gradually adapt from an external source domain to a target video domain. The adaptation of the deep representation is achieved without any strong supervision, but through iteratively discovered weak pairwise identity constraints derived from potentially noisy face clustering results. Experiments on three benchmark video datasets demonstrate that our approach generates character clusters with higher purity than existing video face clustering methods, which are based either on deep face representations (without adaptation) or on carefully engineered features.
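The loop the abstract describes, clustering faces, deriving weak pairwise constraints from the (possibly noisy) result, and then adapting the representation, can be sketched roughly as follows. This is not the authors' implementation: the plain k-means step, the distance thresholds, and the contrastive-style embedding update here are simplified stand-ins for the paper's deep network fine-tuning.

```python
import numpy as np

def kmeans(X, k, iters=20):
    # Plain k-means as a stand-in for the face clustering step.
    # Centers are seeded from evenly spaced samples for determinism.
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].astype(float).copy()
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def derive_constraints(X, labels, ml_thresh=0.5, cl_thresh=2.0):
    # Weak pairwise identity constraints from a possibly noisy clustering:
    # confident same-cluster pairs become must-link, distant cross-cluster
    # pairs become cannot-link. The thresholds are illustrative choices.
    must, cannot = [], []
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            d = np.linalg.norm(X[i] - X[j])
            if labels[i] == labels[j] and d < ml_thresh:
                must.append((i, j))
            elif labels[i] != labels[j] and d > cl_thresh:
                cannot.append((i, j))
    return must, cannot

def adapt(X, must, cannot, lr=0.1, margin=2.0):
    # One contrastive-style update on the embeddings themselves: pull
    # must-link pairs together, push close cannot-link pairs apart. In the
    # paper this role is played by fine-tuning the deep representation.
    X = X.copy()
    for i, j in must:
        diff = X[i] - X[j]
        X[i] -= lr * diff
        X[j] += lr * diff
    for i, j in cannot:
        diff = X[i] - X[j]
        d = np.linalg.norm(diff)
        if 0 < d < margin:
            X[i] += lr * diff / d
            X[j] -= lr * diff / d
    return X

# Toy data: two "identities" with intra-class appearance variation.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (10, 2)),
               rng.normal(3.0, 0.3, (10, 2))])
labels = kmeans(X, 2)
must, cannot = derive_constraints(X, labels)
X_adapted = adapt(X, must, cannot)
```

In the full method this cluster/constrain/adapt cycle would be iterated, with each re-clustering of the adapted features yielding cleaner constraints for the next round.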

Cite

Text

Zhang et al. "Joint Face Representation Adaptation and Clustering in Videos." European Conference on Computer Vision, 2016. doi:10.1007/978-3-319-46487-9_15

Markdown

[Zhang et al. "Joint Face Representation Adaptation and Clustering in Videos." European Conference on Computer Vision, 2016.](https://mlanthology.org/eccv/2016/zhang2016eccv-joint/) doi:10.1007/978-3-319-46487-9_15

BibTeX

@inproceedings{zhang2016eccv-joint,
  title     = {{Joint Face Representation Adaptation and Clustering in Videos}},
  author    = {Zhang, Zhanpeng and Luo, Ping and Loy, Chen Change and Tang, Xiaoou},
  booktitle = {European Conference on Computer Vision},
  year      = {2016},
  pages     = {236--251},
  doi       = {10.1007/978-3-319-46487-9_15},
  url       = {https://mlanthology.org/eccv/2016/zhang2016eccv-joint/}
}