Multi-Label Supervised Contrastive Learning
Abstract
Multi-label classification is a challenging problem due to the complexity of label correlations. Although it shares with contrastive learning the goal of exploiting correlations for representation learning, how best to leverage label information remains an open question. Previous efforts extract label-level representations or map labels to an embedding space, overlooking the correlations among multiple labels. Considerable ambiguity also remains in how to determine positive samples when the extent of label overlap between samples varies, and in how to integrate such relations into the loss function. In this work, we propose Multi-Label Supervised Contrastive learning (MulSupCon), with a novel contrastive loss function that adjusts each sample's weight according to how much label overlap it shares with the anchor. Through a gradient analysis, we explain why our method performs better in the multi-label setting. For evaluation, we conduct direct classification and transfer learning on several multi-label datasets, including widely used image datasets such as MS-COCO and NUS-WIDE. The results indicate that our method outperforms the traditional multi-label classification baseline and is competitive with other existing approaches.
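The core idea described above, weighting each positive by its label overlap with the anchor, can be illustrated with a minimal sketch. This is not the paper's exact loss: the overlap-ratio weighting (shared labels divided by the anchor's label count) and all names below are illustrative assumptions.

```python
import numpy as np

def overlap_weighted_contrastive_loss(features, labels, temperature=0.1):
    """Sketch of an overlap-weighted supervised contrastive loss.

    features: (N, D) L2-normalized embeddings.
    labels:   (N, C) multi-hot label matrix.
    Each sample j sharing at least one label with anchor i is treated
    as a positive, weighted by the fraction of i's labels that j shares
    (an assumed weighting, standing in for MulSupCon's exact scheme).
    """
    sim = features @ features.T / temperature           # pairwise similarities
    np.fill_diagonal(sim, -np.inf)                      # exclude self-contrast
    # log-softmax over each anchor's row (the contrastive denominator)
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))

    overlap = labels @ labels.T                         # shared-label counts
    np.fill_diagonal(overlap, 0.0)                      # a sample is not its own positive
    anchor_counts = labels.sum(axis=1, keepdims=True)   # |labels| of each anchor
    weights = overlap / np.maximum(anchor_counts, 1.0)  # overlap ratio in [0, 1]

    # zero-weight pairs contribute nothing (guards the -inf diagonal)
    contrib = np.where(weights > 0, weights * log_prob, 0.0)
    denom = np.maximum(weights.sum(axis=1), 1e-12)      # anchors w/o positives -> 0 loss
    return (-contrib.sum(axis=1) / denom).mean()
```

With full overlap the weight reduces to 1 and the term matches standard supervised contrastive learning; partial overlap smoothly down-weights the pull toward that positive.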
Cite
Text

Zhang and Wu. "Multi-Label Supervised Contrastive Learning." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I15.29619

Markdown

[Zhang and Wu. "Multi-Label Supervised Contrastive Learning." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/zhang2024aaai-multi-c/) doi:10.1609/AAAI.V38I15.29619

BibTeX
@inproceedings{zhang2024aaai-multi-c,
title = {{Multi-Label Supervised Contrastive Learning}},
author = {Zhang, Pingyue and Wu, Mengyue},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2024},
pages = {16786-16793},
doi = {10.1609/AAAI.V38I15.29619},
url = {https://mlanthology.org/aaai/2024/zhang2024aaai-multi-c/}
}