Use All the Labels: A Hierarchical Multi-Label Contrastive Learning Framework
Abstract
Current contrastive learning frameworks focus on leveraging a single supervisory signal to learn representations, which limits their efficacy on unseen data and downstream tasks. In this paper, we present a hierarchical multi-label representation learning framework that can leverage all available labels and preserve the hierarchical relationship between classes. We introduce novel hierarchy-preserving losses, which jointly apply a hierarchical penalty to the contrastive loss and enforce the hierarchy constraint. The loss function is data-driven and automatically adapts to arbitrary multi-label structures. Experiments on several datasets show that our relationship-preserving embedding performs well on a variety of tasks and outperforms the baseline supervised and self-supervised approaches. Code is available at https://github.com/salesforce/hierarchicalContrastiveLearning.
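A minimal sketch of the idea the abstract describes: a supervised contrastive loss computed at each level of the label hierarchy, with a per-level penalty weight. The function name, the NumPy implementation, and the `lambdas` weighting scheme are illustrative assumptions, not the paper's exact formulation (see the linked repository for the authors' implementation).

```python
import numpy as np

def hier_contrastive_loss(features, labels, lambdas, temperature=0.1):
    """Hypothetical sketch of a hierarchy-penalized contrastive loss.

    features: (N, D) L2-normalized embeddings
    labels:   (N, L) label of each sample at each of L hierarchy levels
    lambdas:  (L,) penalty weight per level (an assumed weighting scheme)
    Returns a weighted sum of supervised contrastive losses, one per level.
    """
    n = features.shape[0]
    sim = features @ features.T / temperature
    sim = sim - sim.max(axis=1, keepdims=True)        # numerical stability
    not_self = ~np.eye(n, dtype=bool)
    denom = (np.exp(sim) * not_self).sum(axis=1)      # exclude self-similarity
    log_prob = sim - np.log(denom)[:, None]

    total = 0.0
    for level, lam in enumerate(lambdas):
        # positives at this level: same label, excluding the anchor itself
        pos = (labels[:, level][:, None] == labels[:, level][None, :]) & not_self
        counts = pos.sum(axis=1)
        valid = counts > 0                            # skip anchors with no positives
        loss = -(log_prob * pos).sum(axis=1)[valid] / counts[valid]
        total += lam * loss.mean()
    return total
```

Anchors whose class is tightly clustered in embedding space incur a lower loss than anchors whose same-label positives are far away, and the `lambdas` weights let coarser hierarchy levels contribute a larger (or smaller) penalty.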
Cite
Text
Zhang et al. "Use All the Labels: A Hierarchical Multi-Label Contrastive Learning Framework." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.01616
Markdown
[Zhang et al. "Use All the Labels: A Hierarchical Multi-Label Contrastive Learning Framework." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/zhang2022cvpr-use/) doi:10.1109/CVPR52688.2022.01616
BibTeX
@inproceedings{zhang2022cvpr-use,
title = {{Use All the Labels: A Hierarchical Multi-Label Contrastive Learning Framework}},
author = {Zhang, Shu and Xu, Ran and Xiong, Caiming and Ramaiah, Chetan},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2022},
pages = {16660-16669},
doi = {10.1109/CVPR52688.2022.01616},
url = {https://mlanthology.org/cvpr/2022/zhang2022cvpr-use/}
}