General Facial Representation Learning in a Visual-Linguistic Manner

Abstract

How can we learn a universal facial representation that boosts all face analysis tasks? This paper takes one step toward this goal. We study the transfer performance of pre-trained models on face analysis tasks and introduce a framework, called FaRL, for general facial representation learning. On one hand, the framework involves a contrastive loss to learn high-level semantic meaning from image-text pairs. On the other hand, we simultaneously exploit low-level information to further enhance the face representation by adding a masked image modeling objective. We perform pre-training on LAION-FACE, a dataset containing a large number of face image-text pairs, and evaluate the representation capability on multiple downstream tasks. We show that FaRL achieves better transfer performance than previous pre-trained models. We also verify its superiority in the low-data regime. More importantly, our model surpasses state-of-the-art methods on face analysis tasks, including face parsing and face alignment.
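The two objectives described above can be sketched in miniature. The following is an illustrative toy implementation, not the paper's actual code: `contrastive_loss` is a symmetric InfoNCE over image/text embedding pairs (as in CLIP-style pre-training), and `mim_loss` stands in for the masked image modeling term using a simple per-patch squared error on masked positions only (the paper predicts discrete visual tokens; that detail is simplified here). All function names, the temperature value, and the loss weighting are assumptions for exposition.

```python
import math


def cosine(u, v):
    """Cosine similarity between two vectors (plain Python lists)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)


def contrastive_loss(img_emb, txt_emb, tau=0.07):
    """Symmetric InfoNCE: matched image/text pairs (the diagonal) should
    score higher than all mismatched pairs, in both directions."""
    n = len(img_emb)
    sims = [[cosine(img_emb[i], txt_emb[j]) / tau for j in range(n)]
            for i in range(n)]
    loss = 0.0
    for i in range(n):
        # image -> text direction: softmax over row i
        denom_i2t = sum(math.exp(s) for s in sims[i])
        loss += -math.log(math.exp(sims[i][i]) / denom_i2t)
        # text -> image direction: softmax over column i
        denom_t2i = sum(math.exp(sims[j][i]) for j in range(n))
        loss += -math.log(math.exp(sims[i][i]) / denom_t2i)
    return loss / (2 * n)


def mim_loss(pred, target, mask):
    """Toy masked-image-modeling term: squared error averaged over
    masked patches only (a stand-in for discrete-token prediction)."""
    terms = [(p - t) ** 2 for p, t, m in zip(pred, target, mask) if m]
    return sum(terms) / len(terms)


def farl_style_loss(img_emb, txt_emb, pred, target, mask, lam=1.0):
    """Combine the high-level (contrastive) and low-level (MIM) signals;
    lam is a hypothetical weighting, not taken from the paper."""
    return contrastive_loss(img_emb, txt_emb) + lam * mim_loss(pred, target, mask)
```

As a sanity check, aligned image/text batches should incur a much lower contrastive loss than shuffled ones, and the MIM term only penalizes errors at masked positions.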

Cite

Text

Zheng et al. "General Facial Representation Learning in a Visual-Linguistic Manner." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.01814

Markdown

[Zheng et al. "General Facial Representation Learning in a Visual-Linguistic Manner." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/zheng2022cvpr-general/) doi:10.1109/CVPR52688.2022.01814

BibTeX

@inproceedings{zheng2022cvpr-general,
  title     = {{General Facial Representation Learning in a Visual-Linguistic Manner}},
  author    = {Zheng, Yinglin and Yang, Hao and Zhang, Ting and Bao, Jianmin and Chen, Dongdong and Huang, Yangyu and Yuan, Lu and Chen, Dong and Zeng, Ming and Wen, Fang},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2022},
  pages     = {18697--18709},
  doi       = {10.1109/CVPR52688.2022.01814},
  url       = {https://mlanthology.org/cvpr/2022/zheng2022cvpr-general/}
}