Versatile Multi-Modal Pre-Training for Human-Centric Perception
Abstract
Human-centric perception plays a vital role in vision and graphics, but its data annotations are prohibitively expensive. It is therefore desirable to have a versatile pre-trained model that serves as a foundation for data-efficient transfer to downstream tasks. To this end, we propose HCMoCo, a Human-Centric Multi-Modal Contrastive learning framework that leverages the multi-modal nature of human data (e.g. RGB, depth, 2D keypoints) for effective representation learning. The objective comes with two main challenges: dense pre-training on multi-modal data and efficient use of sparse human priors. To tackle these challenges, we design two novel targets, Dense Intra-sample Contrastive Learning and Sparse Structure-aware Contrastive Learning, which hierarchically learn a modal-invariant latent space featuring a continuous and ordinal feature distribution as well as structure-aware semantic consistency. HCMoCo provides pre-training for different modalities by combining heterogeneous datasets, which allows efficient use of existing task-specific human data. Extensive experiments on four downstream tasks of different modalities demonstrate the effectiveness of HCMoCo, especially under data-efficient settings (7.16% and 12% improvement on DensePose estimation and human parsing, respectively). Moreover, we demonstrate the versatility of HCMoCo by exploring cross-modality supervision and missing-modality inference, validating its strong ability in cross-modal association and reasoning.
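To give a concrete sense of the cross-modal contrastive idea the abstract describes, below is a minimal sketch of a symmetric InfoNCE loss between two modalities (e.g. RGB and depth). This is an illustrative simplification under assumed names and a placeholder temperature, not the paper's exact Dense Intra-sample or Sparse Structure-aware formulation.

```python
# Minimal cross-modal InfoNCE sketch (illustrative only; not the exact HCMoCo loss).
import torch
import torch.nn.functional as F

def cross_modal_info_nce(feat_a: torch.Tensor, feat_b: torch.Tensor,
                         temperature: float = 0.07) -> torch.Tensor:
    """feat_a, feat_b: (N, D) features of the same N samples in two modalities."""
    # L2-normalize so dot products become cosine similarities.
    a = F.normalize(feat_a, dim=1)
    b = F.normalize(feat_b, dim=1)
    # Similarity matrix: entry (i, j) compares sample i in modality A with sample j in modality B.
    logits = a @ b.t() / temperature
    # Matching pairs (i, i) are positives; all other pairs act as negatives.
    targets = torch.arange(a.size(0), device=a.device)
    # Symmetric loss over the A->B and B->A directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Usage with random features standing in for encoder outputs (hypothetical encoders).
if __name__ == "__main__":
    rgb_feat = torch.randn(8, 128)    # e.g. output of an RGB encoder
    depth_feat = torch.randn(8, 128)  # e.g. output of a depth encoder
    print(cross_modal_info_nce(rgb_feat, depth_feat).item())
```

The paper's targets extend this basic pairwise objective with dense (per-location) positives and sparse human-prior structure; the sketch only shows the sample-level contrastive backbone such methods typically build on.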
Cite
Text
Hong et al. "Versatile Multi-Modal Pre-Training for Human-Centric Perception." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.01568Markdown
[Hong et al. "Versatile Multi-Modal Pre-Training for Human-Centric Perception." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/hong2022cvpr-versatile/) doi:10.1109/CVPR52688.2022.01568BibTeX
@inproceedings{hong2022cvpr-versatile,
title = {{Versatile Multi-Modal Pre-Training for Human-Centric Perception}},
author = {Hong, Fangzhou and Pan, Liang and Cai, Zhongang and Liu, Ziwei},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2022},
pages = {16156-16166},
doi = {10.1109/CVPR52688.2022.01568},
url = {https://mlanthology.org/cvpr/2022/hong2022cvpr-versatile/}
}